q2cli-2024.5.0/.gitattributes

q2cli/_version.py export-subst

q2cli-2024.5.0/.github/CONTRIBUTING.md

# Contributing to this project

Thanks for thinking of us :heart: :tada: - we would love a helping hand!

## I just have a question

> Note: Please don't file an issue to ask a question. You'll get faster results
> by using the resources below.

### QIIME 2 Users

Check out the [User Docs](https://docs.qiime2.org) - there are many tutorials, walkthroughs, and guides available.

If you still need help, please visit us at the [QIIME 2 Forum](https://forum.qiime2.org/c/user-support).

### QIIME 2 Developers

Check out the [Developer Docs](https://dev.qiime2.org) - there are many tutorials, walkthroughs, and guides available.

If you still need help, please visit us at the [QIIME 2 Forum](https://forum.qiime2.org/c/dev-discussion).

This document is based heavily on the following:
https://github.com/atom/atom/blob/master/CONTRIBUTING.md

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/1-user-need-help.md

---
name: I am a user and I need help with QIIME 2...
about: I am using QIIME 2 and have a question or am experiencing a problem
---

Have you had a chance to check out the docs?
https://docs.qiime2.org

There are many tutorials, walkthroughs, and guides available.

If you still need help, please visit:
https://forum.qiime2.org/c/user-support

Help requests filed here will not be answered.

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/2-dev-need-help.md

---
name: I am a developer and I need help with QIIME 2...
about: I am developing a QIIME 2 plugin or interface and have a question or a problem
---

Have you had a chance to check out the developer docs?
https://dev.qiime2.org

There are many tutorials, walkthroughs, and guides available.

If you still need help, please visit:
https://forum.qiime2.org/c/dev-discussion

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/3-found-bug.md

---
name: I am a developer and I found a bug...
about: I am a developer and I found a bug that I can describe
---

**Bug Description**
A clear and concise description of what the bug is.

**Steps to reproduce the behavior**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Computation Environment**
- OS: [e.g. macOS High Sierra]
- QIIME 2 Release [e.g. 2018.6]

**Questions**
1. An enumerated list with any questions about the problem here.
2. If not applicable, please delete this section.

**Comments**
1. An enumerated list with any other context or comments about the problem here.
2. If not applicable, please delete this section.
**References**
1. An enumerated list of links to relevant references, including forum posts, stack overflow, etc.
2. If not applicable, please delete this section.

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/4-make-better.md

---
name: I am a developer and I have an idea for an improvement...
about: I am a developer and I have an idea for an improvement to existing functionality
---

**Improvement Description**
A clear and concise description of what the improvement is.

**Current Behavior**
Please provide a brief description of the current behavior.

**Proposed Behavior**
Please provide a brief description of the proposed behavior.

**Questions**
1. An enumerated list of questions related to the proposal.
2. If not applicable, please delete this section.

**Comments**
1. An enumerated list of comments related to the proposal that don't fit anywhere else.
2. If not applicable, please delete this section.

**References**
1. An enumerated list of links to relevant references, including forum posts, stack overflow, etc.
2. If not applicable, please delete this section.

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/5-make-new.md

---
name: I am a developer and I have an idea for a new feature...
about: I am a developer and I have an idea for new functionality
---

**Addition Description**
A clear and concise description of what the addition is.

**Current Behavior**
Please provide a brief description of the current behavior, if applicable.

**Proposed Behavior**
Please provide a brief description of the proposed behavior.

**Questions**
1. An enumerated list of questions related to the proposal.
2. If not applicable, please delete this section.

**Comments**
1. An enumerated list of comments related to the proposal that don't fit anywhere else.
2. If not applicable, please delete this section.

**References**
1. An enumerated list of links to relevant references, including forum posts, stack overflow, etc.
2. If not applicable, please delete this section.

q2cli-2024.5.0/.github/ISSUE_TEMPLATE/6-where-to-go.md

---
name: I don't know where to file my issue...
about: I am a developer and I don't know which repo to file this in
---

The repos within the QIIME 2 GitHub Organization are listed below, with a brief description about the repo. Sorted alphabetically by repo name.
- The CI automation engine that builds and distributes QIIME 2
  https://github.com/qiime2/busywork/issues
- A Concourse resource for working with conda
  https://github.com/qiime2/conda-channel-resource/issues
- Web app for vanity URLs for QIIME 2 data assets
  https://github.com/qiime2/data.qiime2.org/issues
- The Developer Documentation
  https://github.com/qiime2/dev-docs/issues
- A discourse plugin for handling queued/unqueued topics
  https://github.com/qiime2/discourse-unhandled-tagger/issues
- The User Documentation
  https://github.com/qiime2/docs/issues
- Rendered QIIME 2 environment files for conda
  https://github.com/qiime2/environment-files/issues
- Google Sheets Add-On for validating tabular data
  https://github.com/qiime2/Keemei/issues
- A docker image for linux-based busywork workers
  https://github.com/qiime2/linux-worker-docker/issues
- Official project logos
  https://github.com/qiime2/logos/issues
- The q2-alignment plugin
  https://github.com/qiime2/q2-alignment/issues
- The q2-composition plugin
  https://github.com/qiime2/q2-composition/issues
- The q2-cutadapt plugin
  https://github.com/qiime2/q2-cutadapt/issues
- The q2-dada2 plugin
  https://github.com/qiime2/q2-dada2/issues
- The q2-deblur plugin
  https://github.com/qiime2/q2-deblur/issues
- The q2-demux plugin
  https://github.com/qiime2/q2-demux/issues
- The q2-diversity plugin
  https://github.com/qiime2/q2-diversity/issues
- The q2-diversity-lib plugin
  https://github.com/qiime2/q2-diversity-lib/issues
- The q2-emperor plugin
  https://github.com/qiime2/q2-emperor/issues
- The q2-feature-classifier plugin
  https://github.com/qiime2/q2-feature-classifier/issues
- The q2-feature-table plugin
  https://github.com/qiime2/q2-feature-table/issues
- The q2-fragment-insertion plugin
  https://github.com/qiime2/q2-fragment-insertion/issues
- The q2-gneiss plugin
  https://github.com/qiime2/q2-gneiss/issues
- The q2-longitudinal plugin
  https://github.com/qiime2/q2-longitudinal/issues
- The q2-metadata plugin
  https://github.com/qiime2/q2-metadata/issues
- The q2-phylogeny plugin
  https://github.com/qiime2/q2-phylogeny/issues
- The q2-quality-control plugin
  https://github.com/qiime2/q2-quality-control/issues
- The q2-quality-filter plugin
  https://github.com/qiime2/q2-quality-filter/issues
- The q2-sample-classifier plugin
  https://github.com/qiime2/q2-sample-classifier/issues
- The q2-shogun plugin
  https://github.com/qiime2/q2-shogun/issues
- The q2-taxa plugin
  https://github.com/qiime2/q2-taxa/issues
- The q2-types plugin
  https://github.com/qiime2/q2-types/issues
- The q2-vsearch plugin
  https://github.com/qiime2/q2-vsearch/issues
- The CLI interface
  https://github.com/qiime2/q2cli/issues
- The prototype CWL interface
  https://github.com/qiime2/q2cwl/issues
- The prototype Galaxy interface
  https://github.com/qiime2/q2galaxy/issues
- An internal tool for ensuring header text and copyrights are present
  https://github.com/qiime2/q2lint/issues
- The prototype GUI interface
  https://github.com/qiime2/q2studio/issues
- A base template for use in official QIIME 2 plugins
  https://github.com/qiime2/q2templates/issues
- The read-only web interface at view.qiime2.org
  https://github.com/qiime2/q2view/issues
- The QIIME 2 homepage at qiime2.org
  https://github.com/qiime2/qiime2.github.io/issues
- The QIIME 2 framework
  https://github.com/qiime2/qiime2/issues
- Centralized templates for repo assets
  https://github.com/qiime2/template-repo/issues
- Scripts for building QIIME 2 VMs
  https://github.com/qiime2/vm-playbooks/issues
- Scripts for building QIIME 2 workshop clusters
  https://github.com/qiime2/workshop-playbooks/issues
- The web app that runs workshops.qiime2.org
  https://github.com/qiime2/workshops.qiime2.org/issues

q2cli-2024.5.0/.github/SUPPORT.md

# QIIME 2 Users

Check out the [User Docs](https://docs.qiime2.org) - there are many tutorials, walkthroughs, and guides available.

If you still need help, please visit us at the [QIIME 2 Forum](https://forum.qiime2.org/c/user-support).

# QIIME 2 Developers

Check out the [Developer Docs](https://dev.qiime2.org) - there are many tutorials, walkthroughs, and guides available.

If you still need help, please visit us at the [QIIME 2 Forum](https://forum.qiime2.org/c/dev-discussion).

# General Bug/Issue Triage Discussion

![rubric](./rubric.png?raw=true)

# Projects/Repositories in the QIIME 2 GitHub Organization

Sorted alphabetically by repo name.

- [busywork](https://github.com/qiime2/busywork/issues) | The CI automation engine that builds and distributes QIIME 2
- [conda-channel-resource](https://github.com/qiime2/conda-channel-resource/issues) | A Concourse resource for working with conda
- [data.qiime2.org](https://github.com/qiime2/data.qiime2.org/issues) | Web app for vanity URLs for QIIME 2 data assets
- [dev-docs](https://github.com/qiime2/dev-docs/issues) | The Developer Documentation
- [discourse-unhandled-tagger](https://github.com/qiime2/discourse-unhandled-tagger/issues) | A discourse plugin for handling queued/unqueued topics
- [docs](https://github.com/qiime2/docs/issues) | The User Documentation
- [environment-files](https://github.com/qiime2/environment-files/issues) | Rendered QIIME 2 environment files for conda
- [Keemei](https://github.com/qiime2/Keemei/issues) | Google Sheets Add-On for validating tabular data
- [linux-worker-docker](https://github.com/qiime2/linux-worker-docker/issues) | A docker image for linux-based busywork workers
- [logos](https://github.com/qiime2/logos/issues) | Official project logos
- [q2-alignment](https://github.com/qiime2/q2-alignment/issues) | The q2-alignment plugin
- [q2-composition](https://github.com/qiime2/q2-composition/issues) | The q2-composition plugin
- [q2-cutadapt](https://github.com/qiime2/q2-cutadapt/issues) | The q2-cutadapt plugin
- [q2-dada2](https://github.com/qiime2/q2-dada2/issues) | The q2-dada2 plugin
- [q2-deblur](https://github.com/qiime2/q2-deblur/issues) | The q2-deblur plugin
- [q2-demux](https://github.com/qiime2/q2-demux/issues) | The q2-demux plugin
- [q2-diversity](https://github.com/qiime2/q2-diversity/issues) | The q2-diversity plugin
- [q2-diversity-lib](https://github.com/qiime2/q2-diversity-lib/issues) | The q2-diversity-lib plugin
- [q2-emperor](https://github.com/qiime2/q2-emperor/issues) | The q2-emperor plugin
- [q2-feature-classifier](https://github.com/qiime2/q2-feature-classifier/issues) | The q2-feature-classifier plugin
- [q2-feature-table](https://github.com/qiime2/q2-feature-table/issues) | The q2-feature-table plugin
- [q2-fragment-insertion](https://github.com/qiime2/q2-fragment-insertion/issues) | The q2-fragment-insertion plugin
- [q2-gneiss](https://github.com/qiime2/q2-gneiss/issues) | The q2-gneiss plugin
- [q2-longitudinal](https://github.com/qiime2/q2-longitudinal/issues) | The q2-longitudinal plugin
- [q2-metadata](https://github.com/qiime2/q2-metadata/issues) | The q2-metadata plugin
- [q2-phylogeny](https://github.com/qiime2/q2-phylogeny/issues) | The q2-phylogeny plugin
- [q2-quality-control](https://github.com/qiime2/q2-quality-control/issues) | The q2-quality-control plugin
- [q2-quality-filter](https://github.com/qiime2/q2-quality-filter/issues) | The q2-quality-filter plugin
- [q2-sample-classifier](https://github.com/qiime2/q2-sample-classifier/issues) | The q2-sample-classifier plugin
- [q2-shogun](https://github.com/qiime2/q2-shogun/issues) | The q2-shogun plugin
- [q2-taxa](https://github.com/qiime2/q2-taxa/issues) | The q2-taxa plugin
- [q2-types](https://github.com/qiime2/q2-types/issues) | The q2-types plugin
- [q2-vsearch](https://github.com/qiime2/q2-vsearch/issues) | The q2-vsearch plugin
- [q2cli](https://github.com/qiime2/q2cli/issues) | The CLI interface
- [q2cwl](https://github.com/qiime2/q2cwl/issues) | The prototype CWL interface
- [q2galaxy](https://github.com/qiime2/q2galaxy/issues) | The prototype Galaxy interface
- [q2lint](https://github.com/qiime2/q2lint/issues) | An internal tool for ensuring header text and copyrights are present
- [q2studio](https://github.com/qiime2/q2studio/issues) | The prototype GUI interface
- [q2templates](https://github.com/qiime2/q2templates/issues) | A base template for use in official QIIME 2 plugins
- [q2view](https://github.com/qiime2/q2view/issues) | The read-only web interface at view.qiime2.org
- [qiime2.github.io](https://github.com/qiime2/qiime2.github.io/issues) | The QIIME 2 homepage at qiime2.org
- [qiime2](https://github.com/qiime2/qiime2/issues) | The QIIME 2 framework
- [template-repo](https://github.com/qiime2/template-repo/issues) | Centralized templates for repo assets
- [vm-playbooks](https://github.com/qiime2/vm-playbooks/issues) | Scripts for building QIIME 2 VMs
- [workshop-playbooks](https://github.com/qiime2/workshop-playbooks/issues) | Scripts for building QIIME 2 workshop clusters
- [workshops.qiime2.org](https://github.com/qiime2/workshops.qiime2.org/issues) | The web app that runs workshops.qiime2.org

q2cli-2024.5.0/.github/pull_request_template.md

Brief summary of the Pull Request, including any issues it may fix using the GitHub closing syntax:

https://help.github.com/articles/closing-issues-using-keywords/

Also, include any co-authors or contributors using the GitHub coauthor tag:

https://help.github.com/articles/creating-a-commit-with-multiple-authors/

---

Include any questions for reviewers, screenshots, sample outputs, etc.
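For reference, the two GitHub conventions linked in the template above look roughly like the following in a PR description and commit message. The issue number, author name, and email are hypothetical placeholders for illustration, not values taken from this repository:

```
Add a brief, imperative summary of the change here.

Fixes #123

Co-authored-by: Ada Example <ada@example.com>
```

GitHub closes the referenced issue on merge when the description uses a closing keyword such as `Fixes`, `Closes`, or `Resolves` followed by an issue number, and `Co-authored-by:` trailers belong on their own lines at the end of the commit message, separated from the body by a blank line.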
q2cli-2024.5.0/.github/rubric.png

[binary PNG image: the bug/issue triage rubric embedded in SUPPORT.md; binary contents omitted]
=}T3fT…UNuA>>>ɝ&#^I(*** nݪVZٳgFK,ݻw+gΜI"C`6lM6)]tF/\u͛v u(|X@ аaCmܸ(z,D `^) / Cr')**Jо}}/Yvޭ9s1K@25k,]zլA }xŋkʝ; 3H](z$VZ:w$)cƌݻ7nB Ç:~֭[;v(&&&޹E/dzi޽jժiʕ+ǏרQk׮xkr'؂W͚5{x ^T|yܹSӧOWڴi9w֭۷o'I);,88X ֣GSN)O<&?ԤIݿh;;3b-\P=$kά$UREW 8#^vq,dɒڻwrmEڵkP0;v,kk ,Xmۦ,Ym?wUݻwi;6l`ql=p&VdddVojUDDD7n^zzw+>|X+W*U1 E/; ygϪVZVYvm-_\ڵSLLLv۷k/=WkUl ՝],պuk >>>>Zl\]y=#77޶m}R{Fe5?QƌU\=}T]tQTT󻹹r2 Zn*VV`ACO8v֩S8:~d4iHO>Qn:e͚&1R^v֡Cyh"9RFqEIO?TAAA/p4,gΜٳѶiӦGu{n={V={9rh>`lSjf͚Zzrm>T*Uty)88XYdU);&NoTB-]T111SmVϟ$կ_p*ŋu]zUTP!URE 0kݻkɒ% TF-[*M4qڏ?={T^=rH(z`0hݺu:u8...jԨ&LU4j޽}TfM.]ZO߿_~ӦMYo P0{Ծ}{ouww_A4whhZn]vY^uRHDppj׮mRK4x` 6Lmۦݻ[g2e}v ^)QH@dd7ogϚ=v&ҤIŋk̙ʐ!ٱZh!˗]t)S8z5ydy.]:Ϙ1—$ 4H޽I1J(~I7nTLL83G^8+QR?XK/ܩ@&L5iҤدկ_?_zbbbt1} 2U@-^XoҥKզM+WN^^^ʕ+*W~~S@@6mj 8G^8+˸L0a&N(I{%sVWCՌ3$sڵkU~}}o߾^z%rjȐ!o$W0 ϮKvz'cFQ6mZmڴ)ނ$̙S?&NoQ_l+:d/'C ڲeԩ|II>l٢u&cVwJ>#ծ];1...7n;C /gX8+QRǿto /)y.\[ 7rHٳ{ _@2s a ^@ @ @j/d>s[ `PLL$z3۹sg _@2r ca^@27n\/B @/dΜ9oF4| 0@E$eΜy--|EGGkŊ86yXXl=ٸq_~eʽ1`OB`0h/ϟϟ+M4 jΝ;Kw83fPttO.777]rE|l٢;w*>lyXXlz Y|G/cΐ!nݪ~;?[2M/^TB|x _THɓGRddҤIիWe˖VWUҠ$W ^3gUjd r2*Yܹ`>}hVK/|^pWI;; p2ǏO%I}~7%vǗD 3WUb`GcǎɓcΒ%~W ^Rgx!sQUR%vוeUX1c/^X={`>gX8֫N/N(xH-̠A7xCTHHN8%J=5lP׮]:v=TJQ,ԏ>(zI`0hСq ^i %r2WztAUVMTT)޽[%K4:&((HuֵI+[lN 03WR7+xC   6L3f̈}o@rs2A...qߺuKΝ;gt\"E{n˗ϢoV3 ^q:^;$b/1V\riϞ=*U@ժUK1cbbԷo_ ^u^;$5sgT_Rھ}|}})x`Z^%vz6_kǎ8B&22R-[ԡC,#W\+00;6l/`&G^8֫N/  Yf>[o%cfׯ,X 2h֭z-Ν;W(P@{Q- ~^pWɋ^P8 I&i'OiӦVz״sxw|]rE7+::ZsUxxgz(z6`04x8;vP8 Q&Lx٣Gk\ݻjѢ֭[w8zq^ oXEkٱ^*W@\B_WDD̙3_Uj,q-;:wlٲ}z֬YO*M4+*X_\|YժUݻwg;`>G_8֫`!AiΜ9(xHLppW .HgG`Ǐǎ/` G_8֫; Wڹs'/)yZj .H"ڼy>|7naÆF򎯂 m+*e`0>x;wRJɘ /dÕ9sfըQC5k&Iʔ)6mڔ䅯>s ^aXR.75wgڱc/)31 TڴiEDDEھ}ѱ80} D 3WR7֫^(xHM테т$KN?4ibѣGjРgQSNQlԋ* GHx!Wڴivx:|䉚6mjQ111;v$ ^%XW0 8p͛EbŊɘ ϫB p IDAT @jzz HC @/d?A)$$ĪyҥKM6ŻѣG5yW%?k͚538z^.a04`͟??vڥ2e$cf#1 ֭-[J*iǎjNKw|iܹܹreUr caJ}A @j/d&Me˖I9w}n;c8p@+WѣhrWUN/&BfժUС^}}a_͛7׎;MV5kThh;&IU\9b+*KLLz%K> r2V:uS˗Ν;-[6ܺuK VDDI/^ݻ[p6^pW_LLzR+GHB&((HM4%IǏGʕK:t0o~(xfr `J(z_kҥ(xHLhh4iwJjԩS}z-M>ݪ8q `r *x.]:3LLLڴi3gΨxڲe"##u1-Z[jwTTb b-֨K"1D{KJWc^(*-5(bIٝ.lly>9-3lg&puu)S&[VhQر...zAdk^` Ȧ) ^Dd1laB#F@׮];j֬cǎ-rvv\rXr|988`ӦM(S^m"[WDdXB Bf~zi/"2Wyɓ'hѢݻvz OOOۮS.^B ˗زe ɓ'cҤIf^`> &i =UȈㄌ*c_B)R^wޅ/d;c""Kze!BA^ŋgEDf2ٕ*U*ץ۴i8 ǫWKWPXR^Y/^DDDDdS2?SVxq9rNhMպuk|RfΜ)ܢE 9HdSXR^Y7^DDDDd3x%Lr Ο?-Zٳg9`H,8eÇzDy^DDDDd  6H24RJ(;kQ(p9($$$ ##e˖76mSm_ؾ};jժ]3gbƌȜi߾=߯#1^Iݻw{F+<<ڵBÇQHIdnXlC/""""z ȒXݻw-[HҤzLj#k۹_...ѿ,X׮] pIB ҥK(Sn/FYs2%KoŜ9s0i$x~}͛g`EDDDDVYkIKKԩSpBtfϞ.cs r㣏>\"[dT_#%%|e .xxx9c-Y-B?Yky5ksx@TT={")))cK*'NF:쌍72"Ғ+SIMMxɓ1k,Y9<:tׯ_Fl^DDDDd27JJ(#G0""c2hܸ1nݺ]6??? 22ك\۫]6qqqq֭?kΝ;ѦM^zeJǏǂ 9s&LbpϟG6m >D% nR^&^DDDDdu4^GEʕpdDDYLFFڷoÇsرcao}љ]xYVR'N@…sԩ;w.Jcjժ 6jժڿ0"fʔ?VZiK׻|XbmY+ЋBi&i/"2WH=[W#G 888 ڵkɓ'# @cعs'n݊'*jsD"KzEY1"""""P(,Ld\\\qF#>Rݻwzejl|խ[zRK?QYte^\̍-M-[W6mxUɓ1k,sqq6*V,c$2'WC/""""8۷D'/"2W6! sIKw .>^2G}5k4^5u*3lc$2W C/""""( Ȓ̛7o_|)XU ))I>l-+seR 6u|DyrЋ,Fzz:2ZlY^DdluBt*̙ :t(TEFF>-ze2eԫW~)f̘-[HHH{|J3gDttmr~X"""""3}bҶ'GFD-OȔ+W'N_räIdi  P+Mdl^;P(?3/2 +ҖB 4IKKC>}_Ix 999aƌ8qʱQQQ(S Jv777=zT{\t kݻIdk^Y 6h Kɾ|+WQ`h(ezE=li ;̎NǣYf*>&MU{yyI&INNF6mj#}Gdk^Y3lڴI㒃'OƬYo޼'O2777ٳ銡?GFD5OȌ5 nҸĉ}ʶoF ԩ6l`И""";udP[Dƚ땵ӧ֬YNիW/.]=>cxyy4www۷Z2"m^>xO/""""2;iiiݻ7#mFXX/"2;|||b ѪU+$$$}ؿn/FBd?q8::GڵkmذAc Ϟ=CuٳgѰaC ȴXPҋBZZzꅽ{Jۼq1+W.FDN!`gg'=OHH~^l5j @ضm5j|u~g}FDDuԩL뫌!s\6lÑ???lڴWKd^Y[nr労mٳ'6lWWW@WFt OɁWzQcEDZ'dJ%tI&q@3xA0Jv{,X@˔)mۢt(Rw^\v PdIbŊ:XkѨ[.?~vnݺ\튯9s`ҤIZ+>>-Zݻwxɰ^\zQRx+Waaa XҥK1bxxxhܸ1mۆ}ʲի1|p{f͚ؽ{7} ҂5+k7xÇZU˗s5j9Db2o7֮]gHHH@ٲeQ^= 4ڵQЋLZZz`i/"2W!_ܿh߾=;whDEʕq-Z.XQZ5uۗL撆퓶1""se2BtAkĉhڴGFDze 6m!C`С? 
"##agg'{_x)^}b۶ms{{{m 63]իW1Oz%"""",-LȬ^Z x 6 ϟG|82뒒!D/#җ-+Kh"ٳCJ˕+g x{/9_ ȔXۥK}vy5aT^]帤$,YfBRRRvѱc< t[H)))ԩJ̒-L( ̜9P^=^QlYݼyǏ70رcѪUM/[W޽{6m@T/ĪUPreٳgF?} ȔXߒ%K{ 6mNxok? .vj ;f! ȨRRRХK>|XږxyyyȈ 3g`޽9s&D 0vxôhCΝ!@jpAGzehٲ%?+VƍqIDGGh@Uw^dJWOE˗(]4.^"Ez^bb"zPbC^DDDDd4 ȒpB ׫ ;v4,Ǔ'OPfM|RP^YKbĈjի1x`|򈈈L|) 888=zooou0`IKKC߾}{nM|qyC""""2d^Dd18!t?w) |L<2ˠT*g^-ZΝ;y42W!""?JvI1Riܺu <L|=|e˖=../_>[m6tM~S/uЋd.X"/"2KQk׮Mׯ_m۶ٖ"`ܹ Sh޼9]fQ5` JC AbbV_x֭3XRSS1h !xI^QF!** ;wFHH<<<>˧s{NNN_ B>5aEDDDD 9"mX"='dryŗ+)) ;vdſӧxGիY+o|Ep^dRWm޽سg͛7޽;n޼ ͛7zoumTV 6mһxO/""""Mrr2:wGJxℌrǗ;ۇ?#3/[,XBz<2tWnݺpv튱cǢ\ry&~/e˖8x d/3"`2oɨZ*>|C\\>}ŋGzz:=̙za߾}ߐUŋ@^DDDD$ MWXXJ,##"Κ'dӱc8q(VڵkΝ;\M&BbÆ jϟ!!!hܸT1ժUѣ 0'CK.Etts .CN:*YkWӧ_|]ٳgcԩ6l/_n!Gؼy_l%Ɲ;wвeKDEE_@:t 40Xk"U ^Y-[O>ԯ_͛7j*( 7nA}yu͛7UkbEDZ'dbbbЬY3/ B޽WXv-s<ܖG۶mqỶ+nXh}B|7M<*"^t IDATY+&O3g`ԨQXlv'''lٲCR4oWWWtIe1zΒ^#kIMMEqm@ձg %%h۶cpfhB~^1Xk"F ݗ|ZJJRUDzeY.]={`ٰw ۶m|YFƍ,7$""""d^ǎe^%JȻV?={ETT--Z`(]t^Hv7 'OָsرcKe˖ԡze*AAAׯ,QN}ڷo/;Q_:u¦M?~iìފDrbr/uk$''v/HDDDD6%)) ;vT j֬HW~p<ӦMS{M9D"„̉'hӦ|+h ݋^z!--M6.]K~~~ h۶-Μ9c⑑1B2w v͛KW˖-~z>}ºu0x`xzzVpp04i'Oh1c`aEzeN9m4QdIqѤI:yHLLB$ڵk% 3xbѧOQn]QH1gϞa l^[% 3,ׯ/vZǦUV2ea͚5EBB6l4}pttosZѣG (O^O|A___[շo_# 1g)J*#4cEDDDDdle3gtؿt% dpTzEʕ1tP}_3fDNٍ7_{Z2MW… z}aoo/ &|-}HcZ!xQ`2_GŊ3Y:tqqqZ[bŊ"**ȿzx|hU/ &J,)֭[p'd&NvB BTf;~RPls V*Ν+>Si[۶mMRΝ;wT&|}}ʕ+T~A*Tze ^w;v^~WϞ=u8>gggq֭\9y$/WkYzojԨ!F!ƌ#>QV-é*UkկW^^B0"""",^111y=4Rpp}gy=" cǎihx IIIK?6=z$˧$… 2NrrZ;OT{׫WĸqT`el^|nݺ&b ,UO< ѣAc}h۶ƿ~x~rr]6/29+u.ŋ`ݾ}[L4Ixxxdɒ޽{Z[eB؃@||<>cKjժÇpy82둔+WbܹqQ ~0JD`ʔ)={ hٲeJ>7n͛7k֬4hm{QHm6l@TT\\\W_iL29rVc+V ձ?3f={ƍ/_>,XÇ8mΝ3ʘI^LׯQZ5߸w_ܹsuԑźut#88B@R o>L۷#::Z۷oŋ`h i|EGGk׮HJJBҥqitQ~~~3g߿ѣGAcO>\Э[7̟?_+",, %KS^nDDDDxݺuK$e'Xzu'2y s!~(^(QEDD___wPD qΝlU^]]tɵgϞ GG ȫ )\\\J{hdzY&@bB>o߾SNjUT~ʕ:ѣG2nR)zowʕ9?ɰ^Ç /^ܰ0Qt?uMJ(Sʹ*U2+21""""qqqqaÆ (>>^TPA\ZJ־21˜~왬gn4ɫ 5kGGG}vxÇxzjtR5;`?߃+K.W=M2E=3F6D|YN /==]dddhe-[S 4ڵÕ$%%:udСl}ʼݽ{WmƍӫѸq͛7kVr6ٴxkgΜծ]K,00wUoųgd+%%E9##6lLk֬A?6QNjɝ9Ocs j ϟ?'f͚믿ƍ7p1,_v۷/ .m oÓ'O>ɋKctըQ~< iiiZ pٖvssCPP<<7d_6.oooO*۞>}S(G$7+'dooʕ+V|sN|jغuk0""""Ih۶-2 渿XbUzu瑑Xpl@ѢE%11Qֶ 9s* UTGrr2\"=vZ!pYxzzt ˗WhQh|޼yq_|ƍ>۷ԩSqq9.\ƍ 55Us˔)|̙]CUU*^O +åcҤI(\0+ٳg/K.!22Πl2:UN_l@׮]:zٜٳҶЫ H ͚5Syf͚ٶ͜9ϟKDEE.:y9!w/OePQQQBH7lؐMvޞry7n0vuoVXj.\(}O?xصkW3FNdd$V\y*ϯ\y4駟l+Plip ]tO?W^!::SLA׮]e n޼)پt8o-\0s6mڔc? 
ӪU"V^VT\Y(JYٷo_{-KۙN8idmzY1M׫Wzhg ѫW/#GLprrZ}%"tQNiB)S|~Me'k8{ptt*zիWe闲k߾}߹l_@ӧ UOIIReNʒxϟ!V%DRRRgddڵkkvvvbΜ9zqd' |j8wl}sݵkΩSd5ee\xe}x{{˗~e.+@̝;WsF֭[G ł ֭[O$c2_|)ѣG*V/6x\%KD6m n+1""""" ,BM4qRE+%%Eߋ!;;;Փ- 9A bذaFkDD8w]ׯE``С(Z...aÆok7oVJ*%o#Qxqf9z״ >ʕ+'VXq)fmIm-ZT$$$zR'NzE}cƌ믿eʔWPA<~ؠA+u}QN_X&N}Ϟ=+a0' īWDݺuU>6nܘdZM|)J1}t'2NNNbFzdmBwxTV^ĤIĺuDxx,KӮYFyvlBH"Kooox 9QFiGEEGx$^}wïUV~mVj履~ WWW@|z_I-+T*G}$J*|ĈbŊXg+JѲeK@ԨQhK4C/""""+"ZO| !ĕ+WDÆ uV8{^5*k1$ʜl(Zhڴ2d;wصkyf~oV]t%ۄǏM<ݶm[G.X s .,&N(>| i&i"핸Ν3ůXk2w h]n_^ZܹsN;{ݻ<^/>{dtWmժU*>ׁ}5G?Zy5 e%^DDDDu>|.Oծ];ѽ{weKTcǎ 777m;99͛?4$+k7V^WtA3F͛7?CL4IZNIa^*J.;tO uB ݻwgkwذaZogg'|}}E^D߾}EN >FgT^z%\u֕~"44To^ ty?CB={ژ={ίĉ?9֪UK y V+ ~>?_IMMGΙ2e/l !"EGGuָr労q Apd/_bȑ9s&ʗ/HOOǢE0i$( 7o^fdd͛|2ooo4jzW IDAT"uLٳgK-[7m4̜9S>_~%JFn݊ݻ}kƍptt1bIs׮]Xx1bbb4SN~6RcѢE?bp[?RL%,, ֭òemױpBlܸ:]|yЍ7PF ( *T}4*UO>U{%K z;t+9y6mV{Q\9|r>}y3g4+#2+cǎ\>W|m޼{ 4^戡be>v-M,k|ݹsG*UJ\zU~__dluBF%K޽{ ???0^˺tڵks=^PVZxe:~ GGG1a.?kZׯs]YӣB b9} jѦMƪo={L5uT̚5K>GGGT\Yk֬slѣGz*\˗/̙3x𡴟2޽{ѥK̙31e[j o%J+W;w ""BzгgOAcGPP5j ,,̠ϧ)))(_<>} [n_+VĔ)SЯ_?888H۟={___$%%@xNɈ 4hlق~=z@jj*vڅ@=G{|esv܉K.ݻx9 .bŊN:h߾=6m '''C_yԍlWx5mڔWx7oވRJ\a_K.1u[м,~YTC &ӧ+Whu^NKf>˧['O^28qܽ{VxB.]Z+'22Rl޼Y㣣EZ|C/""""3)~2GIl޼Y+SnWnDRRtR-~ M(uB&...-M>]vr u׳gDɒ%xOzO~7u W.ǐIOZL-keoo/n޼)k .ޣ}b̙p:_UT[lf͒M>|(ǰIG\L)k@GRR(^ *v؄/HWȺXELL쯁W$ʖ-u NiK~ ̔Yf ذa4JŊe2|`E`2^...m͘1C6H>*U '''lxerttϚ)xFY"pR9{+%%E,YDx{{~e>&L zw-[=zQ@s_^z뒟x䉡/f؃΋/ЪU+\vM֬Y3߿^rq_DDŒoݺu zln:888e,dLٳgK-[T{ァoXbZ9sLL2E |'رcG}=z4NhҚ+S B~=..}d﫯jl`` bbb#Fݻ?s_K,AttA%U8pʿ7n1B(PBÆ u!!!UwG}Oܶ-bEDDDdf?VZ6^^tAAAFnݺcx1„ݻwq!̘1CcȐ!:MfiۥKЮ];ڵ+uꏴӠA7nĖ-[x@BB3Gӱze LG/z˗/ҧ;Əx5͛NNN4hnܸ͛7zZ4^"C^=l[p|FxB Ç5_nBV_7oD&Md//5#"""y}͛sIC 7dQ︤!.P(TϚ5Kz֭BT*ŋEƍȑ#ׯ/Qzu:dkE_ӧOӦMS=z4{*&M$J*eВ^*=~xC^iZ땩[0G֭ٳg ;11Q+VLZT*Ş={|՘ 3>Xٳgֈ-[,ix_b߾}"==ٻ(m. ҤXbcbA-F5j~1jbѠhL4jƆX5*"X( /ceβpkKvfΜqq3gիW/7o\~X2^f"!!kԨ˂UVMx=c˘ӧxA(2r;uǏ^ߘݝrQ)Be?<ժUXme2msJ*x13ZJhoɒ%z > ^hj̙3ܹsgE ze^d2/^\e;ʡ)Rxyyq||}D LixVZaJC r!$S FEE̘Sfdd()  ;;:ttB'N "eI$WT9lllhǎԣGJHH ???ɳ.22wP;Rک*{ה9իhVU~46;F7vŋjcܸq³V^Z-ZЁٳԹsgu}ʕW?ze~R)yyy\߹sg4TET'NP2eDE2uPة+--] R~}~32pذa񕝝k׮~M/0|r\\+VL܎?׭[}3f ِ!C4^믿* vssSAw[N}5djLP ׫$˹m۶LD'_~ .ӧ?u痿??^B#F0\=z`T&M2}m^=zeƬIII S\+[pW^LHYծ];NOO7u@sb_r7oŊ㤤$u۷oGFUd:uwޞ;5j`GGG4jRRRH222^{~{ T .ԹΞ9sF! 
Pv\GAW---gϞͯ_γ.==ׯ_իW+СGDDhݗa1o߾ `+++&"H$uVQPʼ bccgv!B/yBB&E /&"6mGFFSd6lؠw^xBhhK$?B \J<ﱲgiӦj4h/_֪/ׯE ZYYիW>e_̕L&{rƍ ڵkgΜX .8p>|8 yd\D j1c"B/xYa۷o3g"b32׬Y DWd 0 lZCuKxxQ~~~ Sk֭k.Gy۶m,նyMgiZӓZ6M6FPꕥq]~v޼y¶}dz0eh߾x]x?ܹm+99SSS0`/ G,ÇyŊܫW/c___8p XoܸUb_ׯ__lTa y r*% qF^ ĉ`n͛\|y([j֬ϟ?+ ʒEGG44JNNf777&k׮tze9BCCUr|EGG+wy]|Yx+/ '?^ӧ]u1!777̏(D0 r93F޽{MMqx(ɸUVJߣ!CQ͛ӧD;6+;ްaƻk-u;w9sL}JJ^L_Zjyꌳ3CСCLD\t<{Πs,lzQ\\0uVұcG^`ҤI{VlY^x1;w\5J+wڥ8ASeWBd "E0OMѺ}MСC˜m۶֗-[T* ںu+)SFaM2^J: ̬YhFr;w\.71ݻGVNJ'N`ڻw/xB$ /4rH322[ntĉ<233iȐ!`RUXʖ-r}Rn$Hhȑ#JOOOF>r)11Qi^x{{ƍ)22ƏOvvvz*۷/zd_PJJ uЁvMŋga^۠A?T~[HHy{{իiŊԠA:~~Ȑ!deeفԩ@A Wq/ r 8p徒S;#GUxϞ=+Qa+O<7ݻw`ݺu7j(YveFv` 5k0ŋ!x֬Y\hQرcZKSED666gQl߾]!⠠ j>}`@^JNN`.S[P^,Ⴈ޽{1mۦԬ666||WA  3Çr45a@WquѨQH.4a6mɉ^|w_e)wN:ENDDTH}мyݻw:M)SL&ӸD"UVѨQTnFDDdccC;v=zܷ?7nз~KǏsu҅ǎkhf)wrOiԨQvM63.\rи8>}z~([ڴiu=dr:!?~hipЌ^eo>^hX>M2%ߏs6m{yy+7i҄.]ʩޗbbb^,?֯_14bf`"bz d^DĥJ2I&1D"ᐐy)_c5h qu^"ҥ ;88xe4_C Q|effWD fz-+ˑ^n]#:%___zZ@@޽SZF)|UZxS>ʕ+GL};w׸8;w(]?yd RM6tY^ ͛7yޞ8@mڴ8/y3ԸqcQL]zer.^+V=xJ,)qH[l ҥKDkt;K[FU1B:////vLaP[z@… BUBڱcy^|I'OTѣG~F9pVTɠVPΜ9C͛7W~͚5 $[v-/>|XiEDN]t'Nr͛ÇNoq^h rY[[S͚5^KIII&z;>|(j^P֭)&&FxW^ mR:u>mڴISRR`kݺmb25jԠ:uPfҥKԷo_a֭[Ӊ'TR-"]'!9tEg777:zhgQxx} FݻwGeT=zmөk׮Muؼys #;;;۸ѣGx9+m^Y ym˖-"1%SuU>0bŊQxx88q><===iƍϗ/_0.h߾}T\<٣r?[[[ڻw//̜9SeVZѲe˔sqqpjذ(ER4A*W_ѱc n?%%_""*Qzjٲ/EL=hтn߾U{_Ν;Ӂ $6 D*wЁ+tυ?_pAN*QLڳgY[[+~eюiժUDDTjU $__tA1%J/_/czUqF/+RЀtn7>>իGDDD'O6?^"___a:"^ٳ4m4w}8::җ_~Is%777{ 5nܘ5nDM4%J==xnܸA YYYѾ}sڝ> 3j(Zf խ[]tzяըQ޽+ܱcG:|hz /_^b())/.1@ 6lm۶Mz1K. Be\^ :\.UԮ];:~m+ R222ϏΝ;GD.2eʈgB/\T^۶mS9 ЬY4NVT)ڰauQv߿OC !Hhʕ4f 2ӧL2TjUx UZ޽{GDDw咽={jߟ9ږ-[zcC?S9==]3Ә1chJӁM6:[jܸ1LԨQ#C J^f۷oM6њ5k0+󖖖FG7o5ժUNۃnݺj^:OԥKm=~zAW^^4hmذA``fﳻ;Ӈ߿oꮁ'NTx4-ּf7o\c|aĿ(l;c|ĉ|;ɓ'ޞ9&&Ơ&O,G:uX.kg̙y~ҥK%k m.N&СCUX>u:uE; ӱcG&"vssM6;&ze޽{sŋ+Vl e+++5jīVDyߋp%[n^!D"˗k},^x1;88|J*qDD1LNEDܬY30/^p>vҸו.]>|hPr?~\h~YXi;;w"2"S+Ntt4(Wݻw甔Sw ߠ^ԩ{W˖-9))I6WXwŋW+믿o H^*11;WWWyvvv_111~1^zrw 6T;͏?6uW|;vL^9KjѣGZRT- j0>^PEFFrٲe>1c{7l0~6AAA*\\\'?ޠ ,c"b[[[>tmr۷o9<<>fW^||} $+󕚚*\$HxʕJ;u+WNK.ƍAZ^[ .x ^P *Oe7n bUч>|X^ZhK+ggϞq`I&ZߙveЮ]zJΝ;^}]H|yNj3 4ҥ0 SwdPӅf޼yj}1רQC+ǵkxܹܽ{wd\2wܙgQ ƃ eLfkkw g},ZHЍd^VZ֯,RM4#˹o߾immͣGXmm߾͛7War%w˸L]@Ç3ч;`/^h yKJJ>GtA;MVZ_`$PDFFR6m(>>^x?͛7 {X|9M8ʗ/Oqqq:Mj۶-|2:DB˖- &U̚5ϟ/looOaaa/?uPzz?@>>>{˖-4p@"";w}222hgK$_>k׎ʖ-Kŋ8ڳg]tʔ)CN"OOOΥMҋ/ʊjԨAo߾*֖KW.S>}T...N 648hm۶t j۶-;v0 +O?7|C\BjDPٵbŊt)OE-%Sn޽{\L;,޽{XbzTz@Y;T-[Gd2]ކ oƌz;gU?|PVԼysu3{*mmmŋ}<˸L]@;Ϟ=UlM޽2u:uJx~wKHHEqժU5!Cpff@W!==L20MXbLDsiҤƟӾp-T-%K1cf|5mW>1u9rDxڷoq/H":hze9UI$yѢEݠޙӣGa5k_߫WK.S{ëWbJC#2ud2nѢ^uƆ |Ҷo޼Ç?W+++:Ŀ;?^6˖-SZo߮L&#0a޺uK &M$i CY5tP^… ɓLN:53g/0:SȜg󌯬,ްa(Q5k 3'TV899Ys.LBzeTQvmNMMU^&oƁrJ~}6A7of"RJqݺuu/9 wMa 8p 駜>aaaloorcǎZcӦM}U 9q{SfM~(&&& w{yyy ak׸W^.t"^n^x[r>v3vݻwÇowѻSLa"L|dz}G͔֩sV\|Y(ȑ#9X^`^Æ Ce޽Ea'''>{,337nܘ_s{gΜ-t=2/"Zj~J#Gd[[[ïF ?+WN;9Aw.\`;;;Xw|5J?g^z|H!!!yi„ Z33]tIɓ'vZq:JPn(##R)U\Yu}xٿt֍Onq@5S+PѣGԨQ#Zv-ry=m޼ᆪwމ/LFÇ,>|8& IDATى~ ^Yw n޼9]pY`N?~n̴f""j׮ e @W˲mݺUt9T߿mkחٵknZ|W |ǵSgٳgA܍s1c+VL;˖- $;;LJo߾̏?z펯L.@//<Z*yFӄ^z?r\+?^>3~hdHNN2eʨ 4/ggg:t(Ϙ1[h!ߋt2ܿӧOϟv܉' /[L}^~sŋ4xsA{gϞI&qff^2jUjU!lۿ|988[ =-P f͚xh ::Z-Z͛7CiWrr2Khwb:@Wƍx stt4O2Expl"( F/v&Mh0buN>^`,5Ǐ7xz m:6lP|U,]Ԡ#S'''u Rh&˅; ?]Ɲ;wV[?۷opG3sBB0p{)^8תUkԨ666 ٹsN@ze-Z_9uHGuq?.\PyCfdܴiSsssS|yatyf^L= cH{#G'OG||^ܼ,ްapdɒWi&+X; ֹ?S+ΥKՕܹ۷oW;u쓒~~~*f͚+P!_^H.ZhZ1a^z5;%\.ZjiCW *24 c&".R/Z߽{!!!,JU~94iׯ/7dDvvv#G>}^ᗝw~}B;s1ܶouQ%K~aֻwo&, }ꢦjժ+3e}Ndz[|ڀJ=իp@Y蝙_|} ʼd2g%ݛ% iF޺uK=<==իWB/0[׮]CU@s=رcy֭_^m5yd}^/e@ɓ'x ;88Dʦ#[`(m 2_^K.rJ *U(A@fe{X933SvmVVV*$M8Qf͚ܷo_LAə2PyڌU107ol1$z x;wN[ ׯ_k|+WD9VVV{zz L *ltٳ':t(Ϻkr"ETֲƍszzAP^͛ YZh/^й=1JJJ^ 0 sU TB+[R)Ӈ###=zB}u ._a!WFFK㸻+YsWŬY41B mרQCzN8VVV:}z1o۶M<(\l۷/Q SxM8-KWL*U׵kpWUa}6 S~ݻw?sů 
>NNNLN;ooon֬׮][PӐK.+zu҅E>k`.|ʒ|+ʊo޼)1NhB ԫ5g/(ʊ7oެq4_&L~B/0+W PS,ڵks]!*^^SVi a%K5Eadz^ݻwyȑ 2) ۷ogkkkJ77o֧۹uF cccWZŅ;v4A\,ь35koߊrܦM )SF6tz޾}*T:Zp~Zc! Lʕ+'7yd='NL&S&%%9ƍzqTT^|9::rxk : |s9sf -ZPΝ;>ɓ\H*\ڷo) rdeezeY?Ύleeŝ;w_~DcΝE]0`fE8ݠ^ի.zj9;; t>֑#G>S KtVX! XrLMWdJMM`.WNWo߾>˗D[li_`}JDD^^^t *S}Gj@p9^$ˉtRA3gΤ&MUX*WLTzu߅UH$ԴiSx1`@0-Zеk(00PٳgCxx8YYYx͝;f͚t]vv6ߟve10WC@@޽lmm?py{{ɓ'5{:txUPA9P̏T*%///;w,zEDԩS'ڳgʚk:u/ B/0sΑB5e^ѣGŸ_x>IIIxb-Z… ŋݽ{hԣGW8g^` g}F7n{Ul=zN:+nܸADBwܹsiJ!2-+ˑMoߦӱc… 3|=}h*xk׎oNDرt;!^RJ\e )ʁ tf`'''9LbnfϞ-Vyذalggӳ} ۷^`TxƄq_y~}||Lݽ#w=y(md2ݻ7eFP_jj*]u癶988++Kv5=+͛ѣm6ꫯC%F[@2wm۶͘޶mzyyq||ю B/0^ˮ]` zjnٲAˎ;ah0 ^xsJNDdnR(T_r!2W...:}}iu m/uK͚5F@2w2K(U0f D#af&EDD? M:,Yb^!222t TVR*VHO퐐6lVVVqF߿(ǃPӧOo߾t9Сmٲ/vR^4N [boɓ'};vk׎iܸq?|IhҥcxbgLi@CUp-\fΜiPգ!CPvjժLvoVnݢڵke0 Mf8rHEi;++J,pǬi֭ԧOQ ʼEGGS6m(..>L-[$gggJHHzVmUR8@^^^j&j߾=թS7oN;vEjRzB :tprvvpjԨmذ*z𕒒BRٳ'ݻxLz8gϞ3iL-ɻwQF:O#J[n?l;::˔)#v|<3(0DӦMS2M~~~3T}Lu(>+^:[YYo߾ͳyʔ)󓛛ߺuK5MuئMNJJ2i(ze9BCCՕ/^Kxuhvuu3gYw ^ 3g *={ժU*첷篾JgF>|XwٲeF>(0 Uhhhz\xqnݺ1qǎy֬YydܱcG&"P<'Lj?K-[N߾})--~'51cU^ShL/0‡Y(={FDTlYjԨf:uSN2P,X<תUTuy{{Q`` mܸfϞMS5P``N*U ^L򢘘jٲ%>}Z6޼yC{Ǐ&((HeM'ORҥu':WX:|0uԉJ.MO>U233UB9{,SZZ/:z(=z"""(%%ҨhѢ駟P׮]>pXl_ҥK%$$A : Ȁ%bfTJIIUR DժUW^Y駟ÇK~d2SNъ+믿֫ Jcǎ%kkkڲe QժU֭[J.P,ßIC%"-[Pj'++ wxT;!{GAJP@~"x$(Cӣ ( M"R DE@z( ded$3ILk3$3뙽s/_>j@ٺ"h?CpXpñc̋/hrֹ"""L.]̱co߾Fҥ_sL 8p Xb)r%̓O>i-[f\n?mڴTzv$+8 5GMu%44̟?듕QG-N;vj$o73p@J_+Wvy5k֘ Cz\}P2כVZ.TPtl޼٭=W^}hzKhxSy=)y%wf&!!繾;bo1!@ozmܪ5k46lp9Vddd۶i޽;</^4mڴqڮk׮>ޝ QǕ+W@tA;d7UThw_1& :Ԅ`3uT<dmԫ믿2O<۟7on>ӧOOk2ӧMBc M/` 9szj֬Mu_Ǐ7qqqng[nɟ?d:ug Uz\CBB;c_4ޙ3gLJ6G'4 ,0Ν3$?ȑ#ft)9ӧ / W%&&k>s^V5+fLٲe$^p*Zvˣӻ(P|g鎟^3_x4jqR%J^#^ݱ~o̜9ѸtٱcyL*UPTZ5sٿLɒ%Ͱa$%%':oJ2Ok SD kMdd4+W6AAAu .8~Sxq'r̙"_|sIJJ2< /PϪU&O쳱] ?k|ݸWעEK.mZje5jhnwܹsK'ߐ^RJf„ fԨQ)|ӿ z5]n[v- 5{ldiӦMͥK|%KL@TfG|=zԩqaڷooNj~Ws1ᅴ)Sڵk;WJ- Ä 8*,,|)0 IDAT=xy]gϞ)Lrn|vd꣏>&U`3gӿ?OgϞ_ݻk|W?;/Z;wnՖ-[ZG|/^<ŗs_B+p;vΝH2 4pZj"-?Lwr]v)={y衇'Oӯ_?CÄ IRRcٟPߦ.[t]wǏɓE]G?UZzӿ]PP>z.wwQ2;*Qٿ={ qf͚W_O_aaaf͚5>x@hzt]4qcw,c…fӦM~_4Lܹ=+[lI3fʕ+e˖nS^=3asY_}o= ɬZ^jz)+U涇r,ub 7T7/SسgO9uꔹ;-\b"##9++{?qTcPPPG|}w&44HKD;vf͚}QHH>}Okٰa/znw9s[l߾ݱ7GxGzV\-[*..q5`SeMpyeTF Ƿz6nhy{W4c ܽ{~T۵vZ͛7O=Os!{d[lq|ڵti5nX{ŋ>*"""|rrze+::ZdI 6UV~WJr͛PB)nOHHPΝΝ$+V̫#8[Wׯ_O111jԨ.]^1cMm\B2+WUVN A6ʚΟ?SNzѣGzox>}ڣ% թS'͛W۷Wxx׹H3|7&gΜNKX 4XYʱc;c~#GḺt$$$x-[8ѤI>M6x;w>!)),Y4iZ?n6l^yǘ毿rSNի[^2|U^XCvXƌ3+VL\֭͹s<Xb1-ez彸83ydSf4Bxxy_ݺuOJJr۷"EX^0<88ٳǧW_uGÆ }:>&d۷o76{nN;)mcq#<3oK>Ku2]7ovٳ 8]nvn:\| 1oS=X?ܣzSܾg;wN ҪUTD ƽƍ+I*Yqo;wN6mryŋh߾}}Scƌh̛jJ?xKk׮Z~J*e9wΜ9G}T;wtVxq :T111ѣGuEm߾]/rʕK,ѯQ2eʨI&)nߺu_O ,PXXGlʕy/Lx:tݱ]:^:uK.ٖˎozɓ'W'LW^1Ԁ3 _KHH0:tpjݺuGSƚ-[ '|>,\дnڄ{dD…ڵkFdFHJJ2_|T[Gxlذ^yܹsVZ/88ؼ[?Zjiւ?<_~e5m$%%Y}իNuǀB l-\-[֜:uHo߾~ 53fjI{QboLג?4I5rHOLL4Zu|rꫩYvms K.Zi[^yo߾}{==EC GiӦ~m|ݰuVӧOӨQ#SlYd (`*UdZnm>cs H;/^4<˗/YrG?>MRRW^c>&>>q1_~q"""LllqB ld4Ne䃏??ޔ,Y2 {ҷ9&f͚Θ?+ߩ3{lHLL4QQQ.kjb,:w)^x\rYɓ1F5,eCW=z sy5m۶5|Yz_̆ ̒%Kȑ#MTT)RbfΜ~ndƎk1f~k|%&&Zw\b~aWpauVǹpႹ]֝R_~%#%z]vy4q$]Zd$^d^ݱ!C8^H8Yti_nkTHeRĄ o*sZҥKxx\p NsSNNX]~|G&$$ir6zk׮.]8^9s7|Ӝ>}:m_n-Zd6mꘔ NsO?ԩu_'O47h;W޹vѡ 6r)MZ[/9s47'OLsXK/}ٿl@F@6|C[W^u:o;R Æ K$SD 駟:iذdrՇ=VLF{勣Sunݺ^9ttkuPPyG͈#?~s1vZ˧橧"%n>j%,jG5.?ݗ$SNS@sY +$%%g}͛'{W^<^bbi۶m%SO=e&Ol~W~sQb ӷo_> d^dq˖-Kz펕̝;(P\vH.<{5'N4Ǐw>11,_y睎 onń aڴiNGPEEEl *8>^ٽ{wj5k/f+7-[>|1=ܗ'Io߾^eENRRٳ_DzGy?ޫ1\b4h<; dY˗/WTT^*I ȑ#էOe= .t|})4448qmۦnݺtҪP|A5lPe˖?={(88XcǎO<ɑ 0@v\ϕ+.] ژ nz畔nsޤI?~  Ǫ^/_Ʉd+:tbcc%It42_5nԞ'Ojǎ>zc^u92}s~n~/q|^?\C Qxx1[oiܹ :Qhz-[^4c޽:zlLRbbt[ǎguU9rp{P ڱc6mȦ?Ç_~.\gU޼y}2$mV;wT.]<:zPB8q/^,WrUM:UTdI6'Jm۶9~>t/^yF#nXd>cϒz"%DGG+&&F5hۊ+jŊ<d v|oҥNܼy;ݻݑ6mdSP!s ٿ2d^CBBL {g=Y瘀?z. 
z_7Ιl۷ה,Y2ZÇ .-GvF/:8izDVk֬֨Wqu/WP`o (`$%Kx=6nh^x7o^'w&22̜93ӞpGϺgv˖-S6m5jTlVr1'N)IJKÇkҤI)sw(::ZѺtݫC)w*Qʕ+R2!#n5h IJh>|X֭$uE+V4;*TCjС:|bbb_)_|*RjժE#o}k֬i_7 0@o={ŋׯ}%IaaaV^yˤsͮ_zJgVTT-[qKvڪ]&N{*&&FW\Q|TD ժU˯2J1xo…zꩧtuI42R5d5kj˖-6&r֬Y3-[L!!!ڽ{#L/f̘{έZ,-۷ٳU`AܹSԦM{nJLLtL$vmZ|A̙}?VHHO)SFeʔQڵUV-=>} FG(#l= Ă LhhR+G;Vѷo_e!S]J7/اO,>}Ӓ{y'LZ]H:;wcbŊEB }sι8Stifk奁WQ^=k_tDxbir̙cDDݻKaj9Ps뒆7xoӦ?ᗥ{ؾ\r&wnՕ˛cǺݻMHHM\\(4pO3fݱ[?`J2'O;ŋMXX`'&d_|UZ5qFSP!5~t'ӻ.\،1\z5;F޽P'22'c? >q=qfӦMn|z7|5ouϟߜ9sXv2=z0yo1""|WnƗz@ZWtRnMbb̛7'e˖;'Zjŋz뭷$Ӻuk^'  @͛7/Ekرvʖ/^GPPٺu ȶtR bBvرT'nuaKGs)S1b9y>]fڴixܥK|5M05n׮[G*ܐdkӤI䳿Ðl22?>IP3tP/6o>׏?u>Kٳgwm$'c@@?:tׯK4f IDATNpoÇ\r)noٲ/^\X`ڶm+I;v,+PTոqc>}: 0@t\?r6lHʕ+zK 6T"EtI=zT?,Y;vXʗ#GYrȡUViΝϞ= Z^e WwѣG+$$$Ξ=Yfi̘1ڽ{_͛W?Vco߮h}~ռys_իWK>}ϟ+!!!DŽjيrܶn:EFFҥKڵk_W:u?SGʕ+tR=zRիWOժUӕ+Wxb#7o27W_57˗/OEXvZӮ];}qiذaFOy!b޼yСc ;V=z9'|}O?> N=#I7n7;A pwׯիW+44}?^|F+Wx}uV/_m6}{9M6o?ׯ4i 1O?+xy|-)UR7{W5zhK:xy}JJJxUjƍ PW׬YTϱkر?,Y$A^VZ"EI&YjxIMCjϞ=z#G-R^~@ׄ Tn]۸qcYq^R.]hN/^̙3} z9mVfJ|7.\XÆ kƌںu{1ZVXA M/2shxqN;zϞ=` Llݺu`կ_߶A ҥKkҤIQ-O?v^ΨW˙3-Z뱂ԼysmذA~~aoذ{=y UnZd^VXUVVZ>ETT) 7 ;w~ ݻۜ nݺD>ѻwo}ᇎ7nHK؆zxh|_)&&F.\PrTF ^dB^ǏWnlN?~\w}Ο?cBBB4i$uyhʕ+>kw1!$&&F5Jջwo͠TUwY͘1C˖-Ν;ui\rUV-EEEf͚~I/:ZJ5˾Pׂ Ծ}TW9rh̙j߾}&^d2s3}Ժu4 ƒ%K4h IRկ_?2ȪUիW{d|>}dP*xzw:uzѢEmJz5iFfRhhhILLTǎ5k֬ Ld}4$nmxkԩ1'((HSLQ }ŋu_~iQڵSbb$i*]O du4ğ_|Qw}FӧOMBBf͚jժw4J*e˖)^=&d]UVMVroz\\:tu:"nA1FT^=5nDWC6m_($$$$&&SNZ`A9vZhG#&%7j֬YرAqڜ {nh<ȑ#իzHUTQbŔ+W.]pA֭[W_8z7ߨbŊ|Ƙ۷q~!7|࿿gcuQ۶mS2eo>;rDK.U%IaaaڴiWns*duԫgj߾㋯j)kuEǏ?e˖;.hz`iӦK.JJJD+UΝh"N:ZhJ, `BU]vʙ3֭[%K(11Q ʕ+ugPz}c||Tn]ǗƎ_~'c^e_4$qƊT\\kOʕ+ ^dS^pjxM:U;v92̙3̙3>788X={￯0  5jȭ%ho?~}7Seoԫ#))I/ .>׶mԺuk:tHԳgO9GIרWX`ڷoׯ{ݝwީ5k֨TR~J>^d ^̙3:tƏxǫUƍD*|  u6hx*{9scG}TSLQٲe=#!!A'Oo+WHz1c8-M  7xO?U׮]ix5i$M<~z7լY3-HLpE˿Wُ1FСC%Iѣw *d :Tv풔w;ިW/^hz\5Mgyd,1ڲe/_M6i:~PrTV-=j޼J.mcjdULۼy4h8s/^eocƌkx-I˗Wu=@ʟ?N8C)&&F?rB4uThŽlzL>];wN~x~F V%%%… ו?~!;`B\5dȐ/$iѢEСEUhQ?Q{L+WLq; /s,R 7U`A,X225M6Mq /^֭[kɒ%ʛ7o͑#W~[>c^;ѬY ;YՔ)S/y)ZQpƍkݺujӦۗ:2ed`2dw+eZ%^78 ?p>}: /2wVXQjժi֭j۶˖-K zO|׎ixޡM4ydKN 3f)1!xիzuy^~FԳgO\RŋwިQ#S!;^Ǐ׳>+c /Ghz#5:t`s2H {}~^~F?nݪzH9sOd1v M4Iݺu 0!Δ)STZ5^~B_UV)22(WAk׮4\5fΜۜ RbB@^+IhxȔ(W 2O>Dݺus4BCC5gnd2 @^@đ^x@„ @A(Wy1 / z PP scyC{N sUV6'(W 2? 'Ntjxi1! PP ^cȑѣSkΜ94dJL+z Hȑ#;8«e˖6ט(W  Gz ^ 2 @^@ #F `0! PP &7#FP^4oHo: /2 @^@cyC5|5oT2 @^@d1vNÇכoN @fƄ @AiKqqvM q7劈ɓհn X)1%RN)dk4& Le ]5@璴T @6`4p)2%ȶ F @`@^4 Ȗ zq=<<\ , Sb@^4 kbyCɓ5qDc@Ν;8B ʗ/3gСCXc_(pz-gΜ;FTItI%\`*z?-[ܖ{ܒݻwEY>}_||d*ϟܹs۶mB ٘\ٳףTHk+}[J7;!i=q~9ή]t]w9o߾]ժU1~p\߼yjժec"pz S4HzmSpNT!r BCN+]Μ;@ʙS x?$M[%&J $/#Vi`Ś5Сv[b/_T2v?d*2-7WƌΟ; hPFΞ; h@HM/HҔ)!<ew g۶I&%/{Le i,SݓLV>kWGNhz@zfϖ.;OS87O+S}jN[$$HӦI7ڝV'7);S~; W*U M/pDž ر޽v'ŋ'7)3ѴK҄ Ү]v'EHRXIenN ${:LN; T)Lxi(Y N!ii^}M/ܶmҔ)R|I`ETvpҤIҕ+v'-ZHO=ewlXrvU]J ڝwI}fw XեԸ)5^`ܹW_ٝVEGKUڝ…vU}H5j؝"ۢV%&JSJ?dwX-*ew#͘!I`UtT)%^K ]N+InRٝo/K'JfwXQԯ+I^୓'>; T)񕙜:%!>lwXQԷ)^ J#GJg؝V<+vpvpr)u=N_>Dr$eK驧Nߒ:|$Y3gNm_;Nt7;~O; IjM/ Nj԰;E NxC~Sdy4׌fL>Bo_lYS89SZo_\9Sdi4.]J>۝V,ܤ;߮\I>gܯڝV+EGKy؝$ˢrGv'+&7)2ӧ#CN+ʗ|SYM/ÇNٝVԭ+iw gGJÆI'OڝV9s4~Mcefͤg;ݻ DiDzYSd94 #]+͜)cwXѩcvpOҴiRRI`EǎRӦvRhz@FY RZvpܹvU&թcw,d35kNʕ;?*:ZPYM/HWH|"I`E޼M~G 2? 
N; YSz-S8OiYM/VƏbcN+ONl>i8~C U6;iӤNl&i5~A $פڵNliSKzS8ki,S~A nf͒_?;Nli2S>G nȝ;?Iv4yeI`ExxTBv'ԩYM/'7)21cN+JN>+3pA;Vڻ$[լ));& .?nwXQ۝Ұaɿ[@@ \y1Nls.\; hHzS8;p@=Z:w$hz@j:t7;M)SkN+ڵZ;[wI_؝_KfٝV!կow g+WJ_|aw +4 =Iwiw gs$7*W;s%KNXF ܤ(X$~=yMN+BCEڝo ԩ?۝L):.\Ǝ; (Q"w*($xQ?^ڽ$hzȺ|L^:~\6L:v$JL}[ IDAT=yR#QhHzS8;p@3F:w$ᇥnNihYnjNjζl&O^; h&l&}gw-4=;+/;-djNX-Ulw gsJK؝VEG'+3;WZ@hz1ֶ mo$$HSJ?lwX*Q$3F1CZ$@hz7JHnRٝo/JK{؝V-*T,.]&Lv; *^*Uɓ؝V}wߩ#GND |aC饗Ncᆪ(o"Ev.^'`4 0j4E}6ł 7XPUTDu0 3{g]a{ hIHcNJN(4XxZtl""""""""""J@~S(u XHJѧ0h֮E'!RQt Eg۷NA5 Et Ex{NAM/""""""""""yxR'{NAʚ1h@t ENA^DDDDDDDDDDRkkIdf[.NB*V"L&': 6TJ>K__t,5@`$ ss5ed$:ID^wNBĦ˛$2XQ|Ɨ:-BCE'!-Ǧ*k+:Py"*JtRFV€%KׯE'!-Ʀ $:;wuQg_N񢓐bӋ(t,:"__[t RȑΟnd2IH EDDDDDDDDDTT݁ DP?oS&MDPtgE """"""""͕gϞ˗x)}HOOG\\tuuaff055ER`eeXYY|򰵵EEXD L<.:L&eat(: ) "RHHΞ=gBd2 $$wT^ݺuÀбcG HD3~~~I7_bĉ]hD@^@Ts$Y<֬/X4 =VNLLD! Ʀi\7n@ODZ%>>'OqYQAAA oJ*aȐ!6lg/_ .#\ll,~7]]t+#:i#:%:I `?H /7czzӐbӋ\D={#G-֯IO>1|k3f'|Ԑ;}={`ݸqd2H;<Ο?F҇Djdxjhժ;;;ёH[˗|Tt,۶ɛ;NBbv\9It@lzQS+ IIIGgϞEffHE֭[4h4hիWM6#ѣʙTʔ<=卯Di䒒uM E!Ml 64 ^DDDTddd 00 ."*RSSqQ8q222DGh׮+,X#(99'Nđ#GeTXQt$6ի̙dy~VMto+VȯE! µ%XYz50b,_/^dË1ydTT ñcǴ/LM6N:سg8DDy:s 6lcǎBڨE `D)=} ,^ z%: i˗aӋHBÇGzdDFF1h ?)))͛7ݻ7VX!: iO? BÇ5@\$)?VbcE'! DG(6VZ֭[όZFF\]]51>Ct E/7NRzw -Mtlz0h޼9\": Q/_d2QHL 8:NaKouN?]D  nݺGt">>+ұIDE[Nc윩S33ibnbVQdʛg}ӵڸQ\ /7i FR{lzSzzzGӦMѬY34k 4@ɒ% U755wޅ/O?iiiΟ'O`С8~8ttttl"ʝڷoVZyhܸqflZn֭[c֬YǦMn:DDDI???]cǎ-:xP0^6oƎ D)0=]]I԰/B:tHt,=&LAFlz)IWWcSw'''tA%6I&hҤ &Ol߾ׯ-x9y$֯_o$"Eԩ:t耎;^mѕ+Wٳݻwc RsСCϢ%*2nn&ŕ+d9z(WNޔgx5u$YN,,#D'bBWt"""6l-[cݺucGݱvZXlzYd+ʔ)q֭[E6md\2e BCCl<"m-Z`Μ9z*^z={`ܸqW6>!CBҥU:ׯ1|A+wwzu) ?.:)][t Ev׬FRkEDDDY&6my0r A6m_`齎;… 8r&L0ѣG*H*U | >sCc300+sΩle˖ *(GJI@L4r%,,GGi 卯ɓ峾Az:ijRtRsEDDDZ xܻw[nڶmED ;w.BCCqy7N^p3F3?|||p)Amʔ)`˖-xۇCjLCvvv8{,O1i&'SJ&:V<ac~ԻwUǢL/"""EDTڴi7774^;v U6'uV,V#*\]]ZOOG C&I>1m4s%5Ҹ10}:`$Y""ŋ~7Qx__I| ,Z$lmE!5U|~""""*\022`llƍCZD)FpvvƋ/T2ƍ7pK%6f K^;88>>>prr6Qt ~-_ZP]ѭ[7zhӦ '''8:: M6/ּ~:.\X鉈T'66VZ""vijk׮գ"&N>zBt Ek 2CEP!f|MlzI0`U777亱L&C@@<==QJi?:t͛o޼QADDDD/^@jjdT"Y-"nޒjҤ %G$_/ۛ j{w)]l dfNBaӋ0sL4hCFFF322q㉉XbjԨ~z Dff&tu9IdJx7"* S=\N*Y-"SId-F%㴿?nɏիQ1e дZ'#3>8o길'-DPt({T@lz'p|(]k׮&_|,"""""UȀ5{S.3CDp=nZ####޽&&&%*6<=n!̓ _Fas6qwDPs'pTlzٳgVFbӋ ((HҚÆ i(L:M6ŵőҲeД3]HțmAyĉxXt16<<33I6P>ETwc*dɒEFFl<"""""dddয়~%$ID'883f̀-Zǘ9s&;;vv& w}036l@jqއJbbˁ'OD'|`ӋBBB Gjjtqqw7ũl,"""""lذ!Ǜ c000&i/_bÆ ޽;j֬  &`޼y*MT,5oL$:Hݻl=I m_,/56 ݻwَ%$$Oec`ĈWjDDDDDRzfϞ-iM##l vJKKãGc+Vѣq)dfflYfa*OTl} 0|;!!hߎEyNNȑS(zX /:Qq t~Sͦ^ *V(iM"NZZ<(Io(DEE!22AAA Ez/edd+V`E:.Q2l p$ 5=cp0zz#ܗ__GN5`F`xPt^D`lllϜ9׫o6m`aa(.HDDDDTXXb5 0m4Ik0p@1Trؽ{7Zl): 8QޤvMtǺc%̌E)8WW * |Yt,ǏE@rr=7~xڵK%a„ ڵ+Zj1 +)) CEZZujժIZctttlx`g':}?}=^e<=ZDP{7g5)6 N:KMMŐ!C0m4ƚ5kN>H^H x5MLLO?IZc4h???o033x167wG Ai':Jɯ)ssI6qVbӋ5jLCƍq"JEDDDD$ޅ C+9sFDD97n޼ۋCT|U"oRZKL*VEP \ <~,: }{ziElٲAQKݦ}s}f͚ݿ=z@1c :::Ep"""#z 000@2ePZ5Ԯ]%K,,wA|۷oqMDFFݻwjԨQp-<޽9,--ѰaC)SP#44111044D2e`cc"Co޼!C)i]{{{L0AҚDD9iڴ)\]]1x`2"I4iL,Z$:I.YVeLX!J$")%)iixHzgw503gį8Ө0c0$Y^_fs$ ɘ>}_4֭ NY5jܽ{7_?wΝ;ZjaԨQ1bT2233ݻwxi544D6mзo_ :TOpp0Ο? 
ϟǓ'OD(QBqqq߱eܺu 2,[=;;;`ѰWDlٲ;vKrF__-[ķ~CWWfm6ܹΝCzzzCݺu쌾}QC"<<\K,uB /0l04nXt"Խ;|85gʕѤfMԩT ml_*T)jfdfś7OVP?|̹;w0vrl:UhtDE6N%8X\ސ+[VtǦx‡IIIhso߾zΣG0m4̜9{ѣѥK7LrM6axI ___bƌ3f O 2>##wݻw1|o˗/GÆ %FDDD9sĉꫯ>:럈ׯO>={D۶ms=$4xIq$ #:6nu~j 3 [KKZZ©iǃ""p v MΪ)upן|RcڠAfd 7\]{0lzi͛7+|Xf->s*sSSSw^ݻvvv * inݺ#G͛Y[[e˖R J,x}dP;)) 111Ri6sss:u UT)tt?o߾Ir}CD$x{{c~EM"68[9~l\Lѥqc\X '~U+1DDL(6 TKKItItŦW1cƍh׮I:+W˗x +,, fBժU1rHɚ_7oƔ)S5eT>7Osmw^|7y>nݺ8y$^6lXYb> @GqF 85BʕQ^= 0Gݻsl̙3ػw/NwJ.͛7# nnnhѢ*W5kC_Ct5׺W\ӧY={fhXh?q0z͞=7nP:ѿ2220x`ܹsG7ou>&66֭zD$[[C%@*牱ʕ*ɥ͚j1F*o|TDk?+ ^ <|(:VґiؚW׮]ѣGDEE!::Hgf\\\(Ww)))r.]0˖-|icccL<ӦMR5ЪU+*PݻJ]kkŘ4iRk.\񜁁_`۶mZo}ݺuos=_n]<|(]4N86m|nbb"Zl<~)=:9ꫯecy& wPZ5DGGg;׼ys\r%2(00u} LD֭[1bݻwգ/((5jx7$Hƍի%kllW^.C{fffcHFGGO?T H 30s7x0)*gKR)5TBvvY@Ŋ0 7T:N&MpzR~~NSWm>#?s*MiL`̛7Wƞ={ .x"""bU5TbHp/^[n?saÆ4T :4[ 7ixҥKx+P z.-- Xvm7ooPX"_ /(UT!---_>tĉl /Xpa^`jj1cxڵk? o\ /^^ / +*d2ۇcƌHIIHt%IJ~A!^ X`°RW/G֭U:O@;1TcGDP ,_ps:5j֬):Z Akԯ_gAAAAСe SRR ]%ǏÇ6 ^JKK ШQ#.yH$A>}DPlYۤT)3 R8o/'R'nd9MqMZjV*zM6Ç9sJ*MNNƀpԩ|=>11K BWԩS.WXP/ ={~g5kRs=?9rAAAَ4l0}|}}T]"""^.\!y-Z`Œ%"R`` Zj 6v,жNv",5 ? 1n㔿モR'g۷N54ebbkkk1FN|hTT gFpp0N<#Fz}t 4O֭[͛u666_bEz¢355Efr=oggΝ;+~i/%s`zz:NoF^ z~c׮]^ҙԏ.$EٲenL8GVMDZSGt E{GN#F`7QY"88Nh)PV-zJt d(QBt '''899!99'v؁cǎ!99bbb0~x:t(mڴ)s-[,^z~ÇB~_SN\|9s7Vzjժzm} ٳgs}:6oތk׮۷8w~GmV%qA8;;+u3+|yyBU".XxHtltttqd8桵GnMUuYpK"bK pCdjjÇ㈈ҥK tZ*s~~~n:inn^ହ =F^ )L(/Jӧs=sN<o߾Wtt4nݺ///9.DDD1֭$kע&"bddva֬Y8<ñl24iҤH?s ʥ oRWŋgD'ɦtɒ7kJ`/"]^_Sᢓh,6@ ;j'ND`` vؑMǏϵuʕ\'eӫVZ[ =FaLR5Ke?猫S*W7n'"...ϱT'iX>TI 믿VIm"n+V gggڿ?\]]U:V [)^^@q/ +U_J^7*6|ڵƎBQh(t,a4Up4^Ň.Kܻw={^£\W?~8,YP?TD TT) U5w DlHD8;;͛*)S6ч:v#GhSV^͛7t "ҿ?Яn֭#13u+Ws$)L>AS({X+FBD5LQ255EݺuQNQ4£Gϴ͛jٽ133Çѷo_>|8Lkv<444礤H=KK\NjtbF|Ϧm۶ׯJSfff$""/99}.\~-~'&"M&Mp9l߾&L@LLJ7n6m뫤>;Ot,gQ(0#G9=p"ָAKTQ(⯿{bN#^%KDŊW\Y`!!! tuuQF TR+WFʕadd333D7@}- !!HMMEXXr!R~}CݺuQY~mqF,\|*U);vAΘʭٔ "^y}}|)5mۆ*UQ"""v6lΞ=ǪU$"\::::t(ڵk!Cŋ!C'J&ݻdٻWsI|޺5V}x>>>>IyL~Oazzz]4Y!QQo>۷/vD$\Vf IDAT*UWWWYFwE.ym"&^N# l"o|A0|I^wO^)?DN&Tǎh ]n3ϟ//[2 C 6kԨ<0̟?|dKm۶?>BBB0{lY0fIƢ,;=֫W/T\|0yF\*QD猍y7o.>lῳ=T)33#GΝ;UR[nرcgիO?y!$$DDZJ>KKk(Ա#ML$y}Iss5{ӬbLc^5kvUVlcǎիَO0ׯ_G޽U~7!̙ooo7k֭ڵkU:pW@@@>,s+f?pɒʫUB4ʕ|ddd%!"""m%0vXlݺU%۴i(&"RX @ʏ?(y]"fo/oRH`b <\t y6ּFZ_$* X(4V57o.99>|8/_^w2<_±)S܇3N>"1㖖9^P/^(td\UVMq4ǚ^EL&Uv\˖-qq>N^۶mÇ%KڵƎBQh(lY&I<^PD^JZSmjN(<tItbO^ c咍QPnnn C(ʖ-_zӧOVVV9xݺu(Wyܸqc4˗ϳ9~d2b*߰aC;v,ŢEХKIkfdd`…$"}N`:tjSÆ(%,OJZO|嗢S(a+%*mze@Y_N:혋dv)![dxP˖-+sjiӦ9>e˖y֓ȫպuk4Jʳ)xE$%%a""""3fʕ+URv8ydDDJWW[lukΝUD5 Yt E}}gaHIt 6 -Vtid;֭[7+VZ gffbhgz'pȑ";ԩ 0''<7}d^|RJSNڶmwaϞ=EfԨQgϞu5""uUbE_sNIk?݁ DP?o p5u*Ф$mz͛7kܸq vc(fڵ+}(r?E0}ڵkَuc^[[[j*>>>Iq#ڷoy̔IDDD̙3믿vJpiب>}W֭[%GD*V"L&': ׮-i=/@ >ߧ覗 ʖ-HLrؠB WVjjjc/^t'meccCCCc.lpAjՒ$:7ce$]p._>mO YIڟzuSKmC]JHHݻws=M:Cq\RN:3f8֦~<<~i3 b۶m9reN'ka?>OH2e/FD$>3Yk9rDzD6m)4I/A3=-QNz))2-['NAEr۫?$ڵk|.gw$kZ;&N( W⣏>QECСCѣG\i8:u*5jTZO&Bg3gyOdddmjժ3kvލQFiZh T:0 ?ghFӦMOD$'|"kK.ZBݻ% h>4 +gJ%JϞNAZbKKr(?߇Jķ~y+++)?ر#4͛ѶmM%kkk]PK,#?,twoqm۶^:'11NΝ;y>)) ˖-+t]غukB9s&ի5k׮Ev |oz ƍ_lDDDdVX.]4:͛+RH$WWWT\Yz˗/eG uKtֈ@׮Sh:}Xg]VlzgݻNAZ`KKիWqٳg֭[@C'''&* 93gP^=;6rsMt 14ΙbÆ / {Aٲes դ{w76שSww7/ލG}׍ĠA: ;w,p]Z%K`:t( \ ;v쀃Cם;w5ȑ#qͬ{!Ν WWW,Yq[[[ܹeUV/|o>lRDDcǎ͛7eG {(s3SMNi.`6+kӫ-oo?epR<~^~Z֭[s} -/_~@ƍڴs#֒m3'==K.Eѯ_?Z QQQ9Sոw,YmۢaÆ8tPlll}vO܍7ƩSPj碢Ю];;vuΞ==8ުU+5vZL6 uƍz1b-Z `ٲe5jj׮ׯnJJ 6ms4ܹL6 4q/+v,kƱcP||KJJB@@:uKKKTV m۶EfP|yKncc}zI"""2iժBCCeG [ 06ǏtH+^YYH+^(3a&ĉQN <|7o^%j5޽3gɓػw/>7k(y{ԩ׿>={`Ϟ=RJrʰϟ;fӦMvڷ.6Ǖ+WcvSTT:w.] 
͚5Cr吙\r;v5T*xxxyrܽ{y?WիoL/>rƎ[^K.۷c֬YZ]f ֬Y{{{ywq% <gϞ}iiixQ&UZ{AÆ Ϻu0rHE>|gnn۷C&"75jԐ^xxHaҥ=Pg1$@24CUIN/)=IIKJScc pZZneڛ/_(UlmmaffkkkXZZK|111ǣG\M4qI$#'''XZZ}/QFARaݺuӧѣ9Kܽ{@cUX>>>=z4LLOlٲرc///zc#Gȑ#ռys,X 0^J%K`9u+(###1sy벉DDDd6mڄ#F(ڳg>Ck飚5kZ/""Bzy@ =SԤ>]tltO`rBKZCRV U&5SM M/-o߾Oӧ ]Νu>fqRPZ5_uZl#GΝ;عs'?أ+7NNNh߾=^zxTa =zl2>|laa={bM4Kzsilٳgjiidd?>|8zj\v@ﵶ<<<иqcmٱcLkb֭lxQRreYfgR6²_g"el L,X :IpOJΦMɓ__I-*c%J޽EG(VV\ :֭ubDxx8BBB/^~(QL899Ut民>} !!Nµk8 Y&5km۾~4mT"#Ch PB Ə#""sBCC4XYY5j@˖-{FH^"""ҽ]vaСSacӧ쵉 ,,,r-mV!=qԡ'P!$@Ϟ@LX4 (UJaz%[-+>o>P v$6yԩ;&:J6mFQh1Ujz:%uSʕADDDz:t(eeMDdlll"I`ر#_NY7P`K DVRZ°arNBy0~Y'Odv! 0622š5k0tPk 333j)ұ]mD mMȰ%nUgϐ)>ӄ @VSP* b̙cI&pssdtORd 6LDD$11QZE&Ć Sgg)4m>,{ِ'OdWM 7P 6dnݺ VӧO5/^ѣG^Dr2bp$ I>Z Z\*kًZϹ|yYP$vDd`ll۷kדR|}}Q1rCRpB;VDD&)) ճ 6U$.NBBd+ymj@56VL%¦Lq!CtL4 cLO^RܹsGDDѣGLb&<D'!m4o.O"#{"RqBeLhk)_ ooo||Jo:q9t /_T?S*RZtҲ#=p>d IaݺÇN)(XHH(R !^IU;#GNA`Kf&&&pi4o\g:99?3t6&)իѣb ^`r7eGzY` 3Stưa@^Sh:^ӵ. /]M2Wt ^ypylٲE旅L[nCDDDDDu5tԟ9s&OHm""CvyYըQCzGm5aЪlYj5uݺPT,־ hFtM/T*… 믿W_rʲԮS[,uHׯsΈS'fϞHm""CVq)YkUm9":i ^]t M7m.\XY[Wz%7P%%EVЪU+미}6.\/ÇǓ'OrgҥQreTT CƍѮ];N'""""*ݻ?2?'ODD͛xlJ*'''JIVƍE²||OOsi$ii@@tOb&{lz4 DbKT*ׯcĈ9gdd˗DDDDD$ѡCDEE)RS6QqEe*k=Sqq?0s&": V*RbTI=,Z̘`7p-Y#9g,ibEDzvvvlx0AAAر#>}H#F`ٲeܗ(X~5۷o/k=c/Ia_-:Ǐ??|d}NB%FRtM/""""""ѡC<~C m}Xb^DD.k_z炂ŋ/D'!mt 2P….?۲=dYtNQEDDDDD$PXXtHE껹aʕ02_򒕕ٳgZڵ&sի IHC}N%`*i7Dcʕifb[^Drs Dz |PE47n"M6ʕ+lԨʔ)#kM2&:ihFt M7kFW׵Y3^;r:c":@I{ի Ǐ/^ôiD$"""""@CE6m GD8xyy^w޲$y3`ot.: i$oIT^WKjh-||X@(I ZGbƍطoaÆnzaΜ9 ?mقWɺcG]7c!矕ѯ)o+:EŦLn߾ኍadd'Ncǎ9ΥsQl|"""""ѽ{wܼyS[Ea~ `۶m^z!55USLc%`*i/*L[.~7n3]4܀'zZq*98s@G HɎM/<}~!bcclٲ8tƍ IDATZ7f͚x"""""*t/_V~Vp*UJDT|ܹsnnnxqJ>>?iii4iFtEprrM͢S f  _ -ر !C:kر(ee5wwC)6HVcĈ٘&&&Xx1<==s=waӦM:CDDDDDKHH@׮]q%E7j;J.H}"*ѵkWɺZ"L7nUV7ox{{Z1m(5JImh=aLe#KeU ] #FNBqqx':mmqQQt"E饥t[1fM4:j믡Ve:{&<<c(NRaʕh׮(T\l 88}NBŘ- W&: bҍ7r=ާO[WT2e .][ʚJ*bbb xDDDDDDDD%J?>Q8֬_[NC^ EG!R*eK7o۷nݺɞL2Wոw$&&&Xb<<|Xڵkc֭8r^leAx{ˋNMfN"J%:^za'M'N(H$6:55syW^}ƖUTQ""""""""'ǴiPF Q,Y[nᣏ>Drt||76+`RIٲ%[ơs&031I68 Kaߜ$wܱcKKspB,áCЬY3ܻw/ʖ- *(̙3AAAuf͚:uꈎ5je˖Ç3f LJCt4KDGN"K п?mߎm3f^Pret<OHЩ(bb ɍM/-n:1ZQFaҤIHIIQl@ݻwG\\[oҤ TKDDDDDDTիW| ܹcҥ EGSSS 8NիW1zhXZZEm[૯DԤD/Z[cPX:a^MvT|޵kjYw4xX<{&WE"-iHMM8V1o}O>>}'OĹsp5EWQBʕѮ];t{F2eDG"C4j ܱcd;u ppNy/]=[Dϖ-_\7>Dpd$?WdT]4>Ϟq#.GM">|xMKHH別ٙ򂗗DDDDNٲeѷo_[nҥKtnܸ;w AOfMvhݺ5ڶmvڡjժcQq1m \&:I;&ŠAJ-ZhNH@pd$B|8/^˗/äI`oo/:@6662T|Ԭ)5)f$[L 0>0cP4TX..R3GtlqqtONCTh\n:+&O,:Q{Sh gD'!mh!'/$DƦ ֭ŋCR@ڼuڵxc)4ݻ5E{wD$S,GKd(ٳgXt)Zh!:QNY`:@@ϞSh/`j #CtcKF3f?(lƗJ_Ę<hDt Mv[NAښ8QZPl":Q%3///lڴ Kذa&NqJ,oojU)4m;&:ipqBCS^ 2d];d&MŋX&""""""""*lm&$ْˁkD'!mX[K$RS+˗E'!z+6cǎahР"cTT ˖-ŋQNE """"""""|HM }!!6Vտ{s`BIŦׯСC8p T۷իѣGȈDDDDDDDDD´hxzN)"D'!m4i"O"#{Id":@IRеkWt8qΟ?/ CJJJ-_<ѤIj ;wFuQwE')@RPa}! 
^-:I `"`TZlz阥%z=zhGRRRSSK ,,,D$""""""""3 &8p@tl%5MƍL8|tO':I UD!rzvvvcQQxzJs.\$۾}='6Ə_%:I{jpI4p3(4o<4m /GDDDDDDDDDT*ݏ ~l >,:iUKt M[׬F"e6l؀+W`׮]:u*W/ҧ=RS+˗E'!m^^>.-sOcKK8zhs):nhh(^#˚5kаaC?~\񉈈V$>Xx@tFR3U$$AA`Kk5jt5fE KS###ѥKlٲE DDDDDDDDDTM'N)2<Ѩ0m<|}{H06K.B|_/FFYYY>|8g|?Bt MAA위IH:FN)$DE/: plzA۶m" NBr4Υb^cwo)4]H2<_t M/KƥNB%^E,:ڶm@-[VxLL f͚%(i?hJt Mq3ru0iTUڵkcٲe9/]Q@͚Shڼ8p@t Җ7*:[}DM"AA222vZAHԤ$[z:j!SSpt$[Fz5矢P ĦW1ңG6lZ- P' @P${J$˗ߢP æW1RjBCCqi(WӦNcO O{*"Bt*A*Fs=~'!""""""""|u):_E'!mt|BB .Nt*!*FnݺӧO8 Ր!@߾Sh|XHM1h0p]/E'Mbڵkؾ}{ݻ4DDDDDDDDDT _}m+:ÇMD m|=E:a":>HJJB\xycPݻy.o>@zShںؽ[t ҖShڱؾ]t *&ʃ &L˗/iӦhppp D'ɖ] 9#: iTItlj4II`-ԩgb;1H[I/Iz,] ܻ': ilY27$[Rۢcӫͱe|駢ܹsQfM1(֕$* E'!mԪ%-uOyPIȀU@&&&@.]DG)3gDDDDDDDDD$v퀱cE ̟ĈNBhBSX ^`bb-[raΝ={(DDDDDDDDD$77)4ݼ)-K$: iO`P)4ݽ ,Y"-ITHlz6m=YCժUCݱvZc݁Et~#]Et4nVNBF?:7Yf]ŋѲeK={ȀU.5)IL 0>*: iZ5Kt Mqq?": 6dpelwcǎa޼yE/88qqq9ǣG|r """""""""ֲ%0a__ *JtF_$2R>^E޽{#11@CVi̺uڵkhڴis/^@=\1 &:%K/E'!mt .: `" !AtlzٻضMt ք @VShڿ7)H06`ZQ KƦM`nn?0""""""""""T&:#GD myyիNif)H 6c>|sqq;#k =t̛7Oֱ*x{66dKIV^aeeʈN-- .^aKK@f!לUT;w"y&MJʕ+ Yj|铸8 QSϟlziܹsV}ԬYfff{G)Zjh֬Y㉉8t"ci\KDFӧ67Zt M~~үT饥+W8ɓ￑O˗?ѽ{wE3c]qȀt .: `b IH;#FN)8XPE%^Zxmoo~ FFJMLL dF'ODDDDDDDDDd zBӹs@F$C>}Dtjlzi)..N!C`ii)(&M(l"""""""""-Eд?oS<<6mDt y#lzɤu֢# 111joq"""""""""{@Shڼ8p@t Җ7PlT饥7guUTIPl>ܳgt )#:I4 |Yt҆rp$[z:f p$06dmm:))IPlgϞ7$""""""""\U"5)6*VDД,ZNB bKK*Tx}5AI$7n+W<4DDDDDDDDDdP7LBǀ+wտӧ/!: )M/-U\Y둕%$ZɓJGi u !:``Biww)4@\$6TF wŏ?($95:JCDDDDDDDDDkPO)4]Z%E 0@t M׮ITj$$3 Uƍs9s&2220k,T*3cԩX`[ZyIIF )ӘSDSF%KASaH<}[n… Z7ڴiHDDDDDDDDD%TJ0i$Ef\͚Pa99I/OO +KtɫW%RPj.V/Ec޽{ѻwovZXBgVrr2\uF`mm-0QRRRpׯ6l}Qaac樃;9)4YXUff6^Edn{htΘ| gzAѸq/3(B2eйsgJDڵk#EG "zDD21RR )8IM~O5:z=hԩ#c5.EDd8ӫO?,+++L0Ah:aÆׯ rs_!CQ`""l.^ΝH6jBuE.NV-ww5_!bIQD-ҢjPE[ZFդZrί۴k,.:Lcߗ !BBV ?>SK}q]^N~{|ŏ M/.y ѡCIX>}:>⺞DD C&Mxꩧ,<<5'NE .xADHWW#+/7x]_KJƏ.!g+q~k_}=]vxw3,d2Ig=qtL&Z5<`*djՒ0+(V.B:OqIO)(_<֯_eJQIWt+,. 
T 89I؟ҥkɞaÜ5!;'"#u\{uؼՁW~t,\wlX C]S@Nt ,^ڸ|K'شiv튓'Ovlܸ͚5k """"""g@`0 ۴k0ctY^zMd{~s дi@rʸ(_L*s__5S8 0POթ#]CC`,uMխ+]Crسg:t r~|7իHGݻÇKWX ;wKH]Q+,ED%ENر^nݒ.!-ڷ&Lt0{q襳+bCaÆؿ?Sl$"""""""+{5h?rXΖ.!- tlZ֓l/HWX:sXHO.!-^z P9-JjZt >P,!=׿uk6T%f9^=.oHDDDDDDDdKvF̛$'K:cJWXzU]SII%E "#Ր"!Ah8Qj@/]B4O҉e@Vt iNRKeI3*^RUFrjy:=}&]a),L-ɚ*]BЋ0]+H1cS_FuըQT#ٷX8Q !]a)4X ϗ.!pEDDDDDDDdL&۷KWV&H[+HS~FeEidOҶml0mPtYn.r%pt i_5jH's.!L&SҚ5޽մi@z֭SO(AVV~G|ӧ|}}888L2TG$j8a2aQ=N.^.!-VU7KRS^Laa%E@@PtYzΝ.!-*TPהtYFڇ)C/+:y$Fjժ^ϟGRR2~ۄ3''o~>EDDDDDDT tSrOxC6_?` KΩeӥKH>}^dMM.!-zxCRxRRKJt4g+HNNv빹>}:z4lر/#g`: j(kW KWV9Á=+,eCᗑ<\ J(zdѢE?|TRXp!K8=`#ѝ;wsθuLj/#ټ[ jTys K|l"]AZM l)]aiV 8XWM% ^: |o~GTPᾯ=z{Ffff88W%fW_"]BZkKWk{JV&PtuөdL&K҆ ?JWzQJJ f'sغu+ڡC0hQZU )KRSE9d{TQ7p?JLz9ҢbEuMJed}O.!-\]5UtYVlptIWEݵkW]KkW0||?#ץKHFiӤ+,%&g%ާyҢn]]SoWH=ݻ3ghÀM2{t:KWXTCҢ];`D KׯfUmڨ$*J KH-վqFrㆺbbK^EvZJg`ĉѣ}ߺu P ttZlO>kIWXpA-iuAϞС.],RRKH=å+,AA;%vC"XftsARW_ݻUDDDDDDDd(F]HWXڳ0`p{w Kz5PP ]BZ~ʕ@nt ikjHo$G˗%vC/nܸÇKgnݺzjj*6n(PDDDDDDDDƌWWW4|oMԮ-]aVPڻW2u+,[tie2^^6lv풮 M5a~8ȑ#ڐ!CpY\z7nħ~)S`„ Xx1;777ݛy~Q Q,]b,Y"SAoM22ԞqOKԊK̲eˀǥKH gguMU,]b9|Ĭ C/.]tkXj֬Y__߇7&L{SjzRRu?٨Fԧލ$!=.!- e#u ;zUWxTr20>pt iFrn8ڵk6~x 6~Ez'lٲZTT#"""""""֮_fR0=<L$]a)* 9.!-ɓ+,ݸ11%Eԩbbԍ%vC/-~_LL+'ʗ/w$GDDDDDDD6o_W+,],ZJzCJWXt XHI.!-zιsG5JRDz03SqQFF[n5k (/^|5#""""""l{w Kz5=mo={JWX:xX ˓.!-^{ Gґ#j_&=/$]aq4ӛPٞ={ҥKc ٔZu+ytii#]a;+Hm+,qti5aоt˗ l^+Wgdd 88_s/n] Kk!!U@Ptۥ (L&aC K6;vHWV&-]A:K?uYxG!>>_\r1ͩTI )\\K22%KӧKH K̲ (ʕSהu˗89ӦUJN8ҨN:x"BCCEZ6oތ/^"""""""4hn(ɭ[ܹի%Ez2^<=>u`U5jkAtF 4{ڱd :O\;V]S{JziTT) ,x\r ZJPt@LJJB%ٳ."""""""0]nub ^Q@%ɤ%f6)GMOIUFz<&П;']B `رp(}6nݺƻ9 IDATd^ХKxzzv<""""""B,N.!-ʗW7*T.1/Wˈ)WN]S+KeT#=>'' @ "7XXO5spPoЫ|}}M:Ǝ+@DDDDDD ;.!-M̟DDHꆲܾ /Kj{W]S.Iժ#̳%z_|ҥKKg>ƻ8*͛㭷ޒx?Jb""""""2Eaä+,')ܑ.!-uFt 0o,]BZt #]a)"B=휔$]BZt/]A̚5 ͚5ΰ0dK:~U@^t i1dлtÇ~L:Nh`ॗ+,?,b嗥+,:,] dfJJWcKe˖Epp0ݥS 6ļy3` wm+,iti5aоt;+Hc+, [']AZ t"]AC/x{{ᅦh;;T\Y/_ti4l(]ai` * hXƍKWVӦMJWX V zJC/m9͚5oooJf&lpt inU$]b;zT(S0\`J!ҢT)uMU.]b|/%մi@zGA݋:t#GDDDDDDdKsҢNuCHn˗KHZwMݽKKHj5(]b ,^ IUkYC/+Y&v܉%K?EP|y| {ŒHnfbcKH-)S+,ݼ7KHf͌7hǸTTt iѨzڙ Ǯ^غu+^{5VZ?OX=z4.]O>5j0a0e8 DDDDDDDsG 9Rҕ+/=;GKWXPO&%I:IWXfKH~[n^IIIԩ 6󈉉1g4ogΜzMHcqk'''t .ĵk0|ڡDDDDDDd/VR{<Wұcj/=KWX:u XTIHu]ٳ%@zt iѷ/vxPNNz#G<{1p@;wXrvv+W^y ñcpDFF"&&YYYHKK*UWWWԫW^^^hѢZjrYH!%on%f;wa%Ÿq)KvVהўn3zBBKQ#]BZ۵K`Cs>r/D^ʬTR񁏏O&lؠn)]BZL?ptƍ[6MP>ȇr!]#Rq PbX mTjN.)nyÂ,X౿V!""""""BV.!-ʔQ/\tC%ERꚪ^]Ch|૯T#N@PtYAzj ԑ(nuEDFF>)Sz1DDDDDDD@Ppt iQRɝ;@xt iQ\@3SSE0COpZx.O5Jԓ..%%ǩS |+WÈAAυуܼ iL )$69.!-6U7$.NOEEIo@̚2 wM0v7JLL|X"v\?ڶmZjlٲ_>|M#""""""?|?_=EK`h Kܹ@Rt iѱ#0ntH`lٞ~[j ]BZ< 0itEewC&M}ݺuyfT\Yو,fz_dɒ{'CDDDDDDTdǎ˗ /]a)`R 3S_]WFrӥKH}W_tZ!z (=֮] ???$&&"//uRJE>V\+V# `Nʰa%Ÿq)KvVF.!-FVTHtYHt i1bzjn55n ]C5lzڹSD'\I\rТE ./Ct~զM+]AZ=$8; * @g$_ |ti5e }dغؼYz]uk^\1*&&F::rr+ÇKH GG5VM,/O]S%Ԫ%]aVP^mpL&N KkF* __İWݺu{-77_rrrEu"""""""s åKHͿR% ,ZIzȖ'"^LK*ӦXQLLF3YҢ|y>UtI`ЇIp9]äDEE!::Z:Ⱥbc365mn(I\DEIF̞ \&]BZ4h`k*1Q]S%Ezig:zUR^^^v>Cff.2ړ^[lN """"""*@Rt iѱ#0ntH`- +,] %3#]a)*J}#.NhZEVewC/h۶_OMMdBÆ  K!i蕒YfIg'ˁ,Ң` KgK%K/KWX:^-u&]BZ "]aE$kJt i av.^}yףQFhԨFe˖!44IXDD}\v }E'"""""f.`: jsg K!!ڵȑ@n ((.!- S # VKH!C@Q:zr!##/7<<X|*U ʕ+J*sn{="">>>p)o߾k׮!77XGDDDDDƖdܾ}?Ґ;wX| *|UUe i xx}J&ګYVT%Ŕ)jʓ'K̶nU%ŻkQV5ӍxyGwtݱˡ^u,[L1n߾۷ok .h>/ѣ$&&~]v FLL nMJPfMԩS>>>A&M*Ur"+Wj_=&|tRPY;Jא&@2uԐhO7ڸ\DǾi#F11@t֮Jڷ/֨R.Vާ\.+v9wy+V@~~t ٰL\xaaa77n >>YYYsp]F*VRJB ]6WOO{lڴ)P]8MDE3gܹs8{,Ξ=ֲ?xڵk^^^ر#yt^^^DHjqc*,";[FIMU{1yx>>5TX* AgB=6`4gp~v;[anܸGȑ#8upڵBœv~+|||O駟FN_lҝ;wp19rGѣGmOIDD"""j*@ݺuѯ_?+h׮d䊋Ӂ:uk7VC?|0@15YZ6TӥKٳ>Փ!"iuߗ.v;O>۶mCbbt L~~>N>ȑ#qFwݻ5kk׮֭^xT^ػȺo>ݻ{ůڵk3g̙:u`=z44i"F%ի9t V @Pc}VZHOՐjUkֶ;g\(`L5Qj//. 
/dev/null)" if [[ $? != 0 ]]; then unset COMPREPLY return 0 fi # If the completion script exists, attempt completion by invoking the script # in a subshell, supplying COMP_WORDS and COMP_CWORD. Capture the output as # the completion reply. If the completion script failed, don't attempt # completion. if [[ -f "$completion_path" ]] ; then COMPREPLY=( $(COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD="${COMP_CWORD}" "$completion_path" 2> /dev/null) ) if [[ $? != 0 ]]; then unset COMPREPLY return 0 fi else unset COMPREPLY return 0 fi return 0 } # Enable default readline and bash completion behavior when `_qiime_completion` # doesn't have a reply. complete -F _qiime_completion -o default -o bashdefault qiime # Execute a `qiime` command (any command will do) so that tab-completion will # work out-of-the-box (e.g.
with a fresh installation of q2cli). Running a # command will create or refresh the cache if necessary, which contains the # actual completion script. # # Ignore stdout to avoid displaying help text to users enabling tab-completion. # stderr displays the note about cache refreshing, as that can take a few # moments to complete. qiime > /dev/null q2cli-2024.5.0/ci/000077500000000000000000000000001462552630000133545ustar00rootroot00000000000000q2cli-2024.5.0/ci/recipe/000077500000000000000000000000001462552630000146235ustar00rootroot00000000000000q2cli-2024.5.0/ci/recipe/meta.yaml000066400000000000000000000012721462552630000164370ustar00rootroot00000000000000{% set data = load_setup_py_data() %} {% set version = data.get('version') or 'placehold' %} package: name: q2cli version: {{ version }} source: path: ../.. build: script: make install entry_points: - qiime=q2cli.__main__:qiime requirements: host: - python {{ python }} - setuptools run: - python {{ python }} - pip - click >=8.1 - qiime2 {{ qiime2_epoch }}.* test: requires: - qiime2 >={{ qiime2 }} - pytest - pytest-xdist - q2-mystery-stew imports: - q2cli commands: - QIIMETEST= qiime --help - QIIMETEST= py.test --pyargs q2cli about: home: https://qiime2.org license: BSD-3-Clause license_family: BSD q2cli-2024.5.0/hooks/000077500000000000000000000000001462552630000141045ustar00rootroot00000000000000q2cli-2024.5.0/hooks/50_activate_q2cli_tab_completion.sh000066400000000000000000000002761462552630000227220ustar00rootroot00000000000000if [ -n "${ZSH_VERSION-}" ]; then autoload -Uz compinit && compinit && autoload bashcompinit && bashcompinit && source tab-qiime elif [ -n "${BASH_VERSION-}" ]; then source tab-qiime fi q2cli-2024.5.0/q2cli/000077500000000000000000000000001462552630000137735ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/__init__.py000066400000000000000000000006731462552630000161120ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- from ._version import get_versions __version__ = get_versions()['version'] del get_versions q2cli-2024.5.0/q2cli/__main__.py000066400000000000000000000023731462552630000160720ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import click import q2cli.commands ROOT_COMMAND_HELP = """\ QIIME 2 command-line interface (q2cli) -------------------------------------- To get help with QIIME 2, visit https://qiime2.org. 
To enable tab completion in Bash, run the following command or add it to your \ .bashrc/.bash_profile: source tab-qiime To enable tab completion in ZSH, run the following commands or add them to \ your .zshrc: \b autoload -Uz compinit && compinit autoload bashcompinit && bashcompinit source tab-qiime """ # Entry point for CLI @click.command(cls=q2cli.commands.RootCommand, invoke_without_command=True, no_args_is_help=True, help=ROOT_COMMAND_HELP) @click.version_option(prog_name='q2cli', message='%(prog)s version %(version)s\nRun `qiime info` ' 'for more version details.') def qiime(): pass if __name__ == '__main__': qiime() q2cli-2024.5.0/q2cli/_version.py000066400000000000000000000441171462552630000162000ustar00rootroot00000000000000 # This file helps to compute a version number in source trees obtained from # git-archive tarball (such as those provided by githubs download-from-tag # feature). Distribution tarballs (built by setup.py sdist) and build # directories (produced by setup.py build) will contain a much shorter file # that just contains the computed version number. # This file is released into the public domain. Generated by # versioneer-0.18 (https://github.com/warner/python-versioneer) """Git implementation of _version.py.""" import errno import os import re import subprocess import sys def get_keywords(): """Get the keywords needed to look up the version information.""" # these strings will be replaced by git during git-archive. # setup.py/versioneer.py will grep for the variable names, so they must # each be defined on a line of their own. _version.py will just call # get_keywords(). git_refnames = " (tag: 2024.5.0, Release-2024.5)" git_full = "2bbd82625406a601add2182ad087aaeae0d1d701" git_date = "2024-05-29 04:19:12 +0000" keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} return keywords class VersioneerConfig: """Container for Versioneer configuration parameters.""" def get_config(): """Create, populate and return the VersioneerConfig() object.""" # these strings are filled in when 'setup.py versioneer' creates # _version.py cfg = VersioneerConfig() cfg.VCS = "git" cfg.style = "pep440" cfg.tag_prefix = "" cfg.parentdir_prefix = "q2cli-" cfg.versionfile_source = "q2cli/_version.py" cfg.verbose = False return cfg class NotThisMethod(Exception): """Exception raised if a method is not valid for the current scenario.""" LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator """Decorator to mark a method as the handler for a particular VCS.""" def decorate(f): """Store f in HANDLERS[vcs][method].""" if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): """Call the given command(s).""" assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %s" % dispcmd) print(e) return None, None else: if verbose: print("unable to find command, tried %s" % (commands,)) return None, None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %s (error)" % dispcmd) print("stdout was %s" % 
stdout) return None, p.returncode return stdout, p.returncode def versions_from_parentdir(parentdir_prefix, root, verbose): """Try to determine the version from the parent directory name. Source tarballs conventionally unpack into a directory that includes both the project name and a version string. We will also support searching up two directory levels for an appropriately named parent directory """ rootdirs = [] for i in range(3): dirname = os.path.basename(root) if dirname.startswith(parentdir_prefix): return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None, "date": None} else: rootdirs.append(root) root = os.path.dirname(root) # up a level if verbose: print("Tried directories %s but none started with prefix %s" % (str(rootdirs), parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): """Extract version information from the given file.""" # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) if line.strip().startswith("git_date ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["date"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): """Get version information from git keywords.""" if not keywords: raise NotThisMethod("no keywords at all, weird") date = keywords.get("date") if date is not None: # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 # -like" string, which we must then edit to make compliant), because # it's been around since git-1.5.3, and it's too difficult to # discover which version we're using, or to work around using an # older one. date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". 
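# Illustrative example (hypothetical values): with an older git the expanded # refs might look like {"HEAD", "master", "2024.5.0"}; the digit filter below # keeps only {"2024.5.0"} and discards the branch-like names.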
tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%s', no digits" % ",".join(refs - tags)) if verbose: print("likely tags: %s" % ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %s" % r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None, "date": date} # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags", "date": None} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): """Get version from 'git describe' in the root of the source tree. This only gets called if the git-archive 'subst' keywords were *not* expanded, and _version.py hasn't already been rewritten with a short version string, meaning we're inside a checked out source tree. """ GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True) if rc != 0: if verbose: print("Directory %s not under git control" % root) raise NotThisMethod("'git rev-parse --git-dir' returned error") # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] # if there isn't one, this yields HEX[-dirty] (no NUM) describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long", "--match", "%s*" % tag_prefix], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? 
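# For reference, a parseable describe string looks like "2024.5.0-3-gabc1234" # (TAG-NUM-gHEX; any "-dirty" suffix was already stripped above); the hash used # here is an illustrative placeholder, not a real commit id.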
pieces["error"] = ("unable to parse git-describe output: '%s'" % describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%s' doesn't start with prefix '%s'" print(fmt % (full_tag, tag_prefix)) pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" % (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits # commit date: see ISO-8601 comment in git_versions_from_keywords() date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip() pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) return pieces def plus_or_dot(pieces): """Return a + if we don't already have one, else return a .""" if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): """Build up version string, with post-release "local version identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty Exceptions: 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): """TAG[.post.devDISTANCE] -- No -dirty. Exceptions: 1: no tags. 0.post.devDISTANCE """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%d" % pieces["distance"] else: # exception #1 rendered = "0.post.dev%d" % pieces["distance"] return rendered def render_pep440_post(pieces): """TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that .dev0 sorts backwards (a dirty tree will appear "older" than the corresponding clean one), but you shouldn't be releasing software with -dirty anyways. Exceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%s" % pieces["short"] else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%s" % pieces["short"] return rendered def render_pep440_old(pieces): """TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. Eexceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): """TAG[-DISTANCE-gHEX][-dirty]. Like 'git describe --tags --dirty --always'. Exceptions: 1: no tags. 
HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render_git_describe_long(pieces): """TAG-DISTANCE-gHEX[-dirty]. Like 'git describe --tags --dirty --always -long'. The distance/hash is unconditional. Exceptions: 1: no tags. HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render(pieces, style): """Render the given version pieces into the requested style.""" if pieces["error"]: return {"version": "unknown", "full-revisionid": pieces.get("long"), "dirty": None, "error": pieces["error"], "date": None} if not style or style == "default": style = "pep440" # the default if style == "pep440": rendered = render_pep440(pieces) elif style == "pep440-pre": rendered = render_pep440_pre(pieces) elif style == "pep440-post": rendered = render_pep440_post(pieces) elif style == "pep440-old": rendered = render_pep440_old(pieces) elif style == "git-describe": rendered = render_git_describe(pieces) elif style == "git-describe-long": rendered = render_git_describe_long(pieces) else: raise ValueError("unknown style '%s'" % style) return {"version": rendered, "full-revisionid": pieces["long"], "dirty": pieces["dirty"], "error": None, "date": pieces.get("date")} def get_versions(): """Get version information or return default if unable to do so.""" # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have # __file__, we can work backwards from there to the root. Some # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which # case we can only use expanded keywords. cfg = get_config() verbose = cfg.verbose try: return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose) except NotThisMethod: pass try: root = os.path.realpath(__file__) # versionfile_source is the relative path from the top of the source # tree (where the .git directory might live) to this file. Invert # this to find the root from __file__. for i in cfg.versionfile_source.split('/'): root = os.path.dirname(root) except NameError: return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to find root of source tree", "date": None} try: pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) return render(pieces, cfg.style) except NotThisMethod: pass try: if cfg.parentdir_prefix: return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) except NotThisMethod: pass return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to compute version", "date": None} q2cli-2024.5.0/q2cli/builtin/000077500000000000000000000000001462552630000154415ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/builtin/__init__.py000066400000000000000000000005351462552630000175550ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- q2cli-2024.5.0/q2cli/builtin/dev.py000066400000000000000000000233271462552630000166000ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import click from q2cli.click.command import ToolCommand, ToolGroupCommand _COMBO_METAVAR = 'ARTIFACT/VISUALIZATION' @click.group(help='Utilities for developers and advanced users.', cls=ToolGroupCommand) def dev(): pass @dev.command(name='refresh-cache', short_help='Refresh CLI cache.', help="Refresh the CLI cache. Use this command if you are " "developing a plugin, or q2cli itself, and want your " "changes to take effect in the CLI. A refresh of the cache " "is necessary because package versions do not typically " "change each time an update is made to a package's code. " "Setting the environment variable Q2CLIDEV to any value " "will always refresh the cache when a command is run.", cls=ToolCommand) def refresh_cache(): import q2cli.core.cache q2cli.core.cache.CACHE.refresh() import_theme_help = \ ("Allows for customization of q2cli's command line styling based on an " "imported .theme (INI formatted) file. If you are unfamiliar with .ini " "formatted files look here https://en.wikipedia.org/wiki/INI_file." "\n" "\n" "The .theme file allows you to customize text on the basis of what that " "text represents with the following supported text types: command, " "option, type, default_arg, required, emphasis, problem, warning, error, " "and success. These will be your headers in the '[]' brackets. " "\n" "\n" "`command` refers to the name of the command you issued. `option` refers " "to the arguments you give to the command when running it. `type` refers " "to the QIIME 2 semantic typing of these arguments (where applicable). " "`default_arg` refers to the label next to the argument indicating its " "default value (where applicable), and if it is required (where " "applicable). `required` refers to any arguments that must be passed to " "the command for it to work and gives them special formatting on top of " "your normal `option` formatting. `emphasis` refers to any emphasized " "pieces of text within help text. `problem` refers to the text informing " "you there were issues with the command. `warning` refers to the text " "for non-fatal issues while `error` refers to the text for fatal issues." "`success` refers to text indicating a process completed as expected." "\n" "\n" "Depending on what your terminal supports, some or all of the following " "pieces of the text's formatting may be customized: bold, dim (if true " "the text's brightness is reduced), underline, blink, reverse (if true " "foreground and background colors are reversed), and finally fg " "(foreground color) and bg (background color). 
The first five may each " "be either true or false, while the colors may be set to any of the " "following: black, red, green, yellow, blue, magenta, cyan, white, " "bright_black, bright_red, bright_green, bright_yellow, bright_blue, " "bright_magenta, bright_cyan, or bright_white.") @dev.command(name='import-theme', short_help='Install new command line theme.', help=import_theme_help, cls=ToolCommand) @click.option('--theme', required=True, type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), help='Path to file containing new theme info') def import_theme(theme): import os import shutil from configparser import Error import q2cli.util from q2cli.core.config import CONFIG try: CONFIG.parse_file(theme) except Error as e: # If they tried to change [error] in a valid manner before we hit our # parsing error, we don't want to use their imported error settings CONFIG.styles = CONFIG.get_default_styles() header = 'Something went wrong while parsing your theme: ' q2cli.util.exit_with_error(e, header=header, traceback=None) shutil.copy(theme, os.path.join(q2cli.util.get_app_dir(), 'cli-colors.theme')) @dev.command(name='export-default-theme', short_help='Export the default settings.', help='Create a .theme (INI formatted) file from the default ' 'settings at the specified filepath.', cls=ToolCommand) @click.option('--output-path', required=True, type=click.Path(exists=False, file_okay=True, dir_okay=False, readable=True), help='Path to output the config to') def export_default_theme(output_path): import configparser from q2cli.core.config import CONFIG parser = configparser.ConfigParser() parser.read_dict(CONFIG.get_default_styles()) with open(output_path, 'w') as fh: parser.write(fh) def abort_if_false(ctx, param, value): if not value: ctx.abort() @dev.command(name='reset-theme', short_help='Reset command line theme to default.', help="Reset command line theme to default. Requres the '--yes' " "parameter to be passed asserting you do want to reset.", cls=ToolCommand) @click.option('--yes', is_flag=True, callback=abort_if_false, expose_value=False, prompt='Are you sure you want to reset your theme?') def reset_theme(): import os import q2cli.util path = os.path.join(q2cli.util.get_app_dir(), 'cli-colors.theme') if os.path.exists(path): os.unlink(path) click.echo('Theme reset.') else: click.echo('Theme was already default.') @dev.command(name='assert-result-type', short_help='Assert Result is a specific type.', help='Checks that the type of a Result matches an ' 'expected type. 
Intended for developer testing.', cls=ToolCommand) @click.argument('input-path', type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), metavar=_COMBO_METAVAR) @click.option('--qiime-type', required=True, help='QIIME 2 data type.') def assert_result_type(input_path, qiime_type): import q2cli.util import qiime2.sdk from q2cli.core.config import CONFIG q2cli.util.get_plugin_manager() try: result = qiime2.sdk.Result.load(input_path) except Exception as e: header = 'There was a problem loading %s as a QIIME 2 Result:' % \ input_path q2cli.util.exit_with_error(e, header=header) if str(result.type) != qiime_type: try: msg = 'Expected %s, observed %s' % (qiime_type, result.type) raise AssertionError(msg) except Exception as e: header = 'There was a problem asserting the type:' q2cli.util.exit_with_error(e, header=header) else: msg = 'The input file (%s) type and the expected type (%s)' \ ' match' % (input_path, qiime_type) click.echo(CONFIG.cfg_style('success', msg)) @dev.command(name='assert-result-data', short_help='Assert expression in Result.', help='Uses regex to check that the provided expression is present' ' in input file. Intended for developer testing.', cls=ToolCommand) @click.argument('input-path', type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), metavar=_COMBO_METAVAR) @click.option('--zip-data-path', required=True, help='The path within the zipped Result\'s data/' ' directory that should be searched.') @click.option('--expression', required=True, help='The Python regular expression to match.') def assert_result_data(input_path, zip_data_path, expression): import re import q2cli.util import qiime2.sdk from q2cli.core.config import CONFIG q2cli.util.get_plugin_manager() try: result = qiime2.sdk.Result.load(input_path) except Exception as e: header = 'There was a problem loading %s as a QIIME 2 result:' % \ input_path q2cli.util.exit_with_error(e, header=header) try: hits = sorted(result._archiver.data_dir.glob(zip_data_path)) if len(hits) != 1: data_dir = result._archiver.data_dir all_fps = sorted(data_dir.glob('**/*')) all_fps = [x.relative_to(data_dir).name for x in all_fps] raise ValueError('Value provided for zip_data_path (%s) did not ' 'produce exactly one match.\nMatches: %s\n' 'Paths observed: %s' % (zip_data_path, hits, all_fps)) except Exception as e: header = 'There was a problem locating the zip_data_path (%s)' % \ zip_data_path q2cli.util.exit_with_error(e, header=header) try: target = hits[0].read_text() match = re.search(expression, target, flags=re.MULTILINE) if match is None: raise AssertionError('Expression %r not found in %s.' % (expression, hits[0])) except Exception as e: header = 'There was a problem finding the expression.' q2cli.util.exit_with_error(e, header=header) msg = '"%s" was found in %s' % (str(expression), str(zip_data_path)) click.echo(CONFIG.cfg_style('success', msg)) q2cli-2024.5.0/q2cli/builtin/info.py000066400000000000000000000033431462552630000167510ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- import click from q2cli.click.command import ToolCommand def _echo_version(): import sys import qiime2 import q2cli pyver = sys.version_info click.echo('Python version: %d.%d.%d' % (pyver.major, pyver.minor, pyver.micro)) click.echo('QIIME 2 release: %s' % qiime2.__release__) click.echo('QIIME 2 version: %s' % qiime2.__version__) click.echo('q2cli version: %s' % q2cli.__version__) def _echo_plugins(): import q2cli.core.cache plugins = q2cli.core.cache.CACHE.plugins if plugins: for name, plugin in sorted(plugins.items()): click.echo('%s: %s' % (name, plugin['version'])) else: click.secho('No plugins are currently installed.\nYou can browse ' 'the official QIIME 2 plugins at https://qiime2.org') @click.command(help='Display information about current deployment.', cls=ToolCommand) def info(): import q2cli.util # This import improves performance for repeated _echo_plugins import q2cli.core.cache click.secho('System versions', fg='green') _echo_version() click.secho('\nInstalled plugins', fg='green') _echo_plugins() click.secho('\nApplication config directory', fg='green') click.secho(q2cli.util.get_app_dir()) click.secho('\nGetting help', fg='green') click.secho('To get help with QIIME 2, visit https://qiime2.org') q2cli-2024.5.0/q2cli/builtin/tools.py000066400000000000000000001432161462552630000171620ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import os from typing import Literal import click import q2cli.util from q2cli.click.command import ToolCommand, ToolGroupCommand _COMBO_METAVAR = 'ARTIFACT/VISUALIZATION' @click.group(help='Tools for working with QIIME 2 files.', cls=ToolGroupCommand) def tools(): pass @tools.command(name='export', short_help='Export data from a QIIME 2 Artifact ' 'or a Visualization', help='Exporting extracts (and optionally transforms) data ' 'stored inside an Artifact or Visualization. Note that ' 'Visualizations cannot be transformed with --output-format', cls=ToolCommand) @click.option('--input-path', required=True, metavar=_COMBO_METAVAR, type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), help='Path to file that should be exported') @click.option('--output-path', required=True, type=click.Path(exists=False, file_okay=True, dir_okay=True, writable=True), help='Path to file or directory where ' 'data should be exported to') @click.option('--output-format', required=False, help='Format which the data should be exported as. 
' 'This option cannot be used with Visualizations') def export_data(input_path, output_path, output_format): import qiime2.util import qiime2.sdk import distutils from q2cli.core.config import CONFIG result = qiime2.sdk.Result.load(input_path) if output_format is None: if isinstance(result, qiime2.sdk.Artifact): output_format = result.format.__name__ else: output_format = 'Visualization' result.export_data(output_path) else: if isinstance(result, qiime2.sdk.Visualization): error = '--output-format cannot be used with visualizations' click.echo(CONFIG.cfg_style('error', error), err=True) click.get_current_context().exit(1) else: source = result.view(qiime2.sdk.parse_format(output_format)) if os.path.isfile(str(source)): if os.path.isfile(output_path): os.remove(output_path) elif os.path.dirname(output_path) == '': # This allows the user to pass a filename as a path if they # want their output in the current working directory output_path = os.path.join('.', output_path) if os.path.dirname(output_path) != '': # create directory (recursively) if it doesn't exist yet os.makedirs(os.path.dirname(output_path), exist_ok=True) qiime2.util.duplicate(str(source), output_path) else: distutils.dir_util.copy_tree(str(source), output_path) output_type = 'file' if os.path.isfile(output_path) else 'directory' success = 'Exported %s as %s to %s %s' % (input_path, output_format, output_type, output_path) click.echo(CONFIG.cfg_style('success', success)) def _print_descriptions(descriptions, tsv): if tsv: for value, description in descriptions.items(): click.echo(f"{value}\t", nl=False) if description: click.echo(_deformat_description(description)) else: click.echo() else: import textwrap tabsize = 8 for value, description in descriptions.items(): click.secho(value, bold=True) if description: description = _deformat_description(description) wrapped_description = textwrap.wrap(description, width=72-tabsize, initial_indent='\t', subsequent_indent='\t', tabsize=tabsize) for line in wrapped_description: click.echo(f"{line}") else: click.secho("\tNo description", italic=True) click.echo() def _deformat_description(description): import re deformatted = re.sub(r"[\t\n]+", ' ', description) despaced = re.sub(r" +", ' ', deformatted) return despaced def _get_matches(words, possibilities, strict=False): from difflib import get_close_matches if strict: cutoff = 1 else: cutoff = 0.6 matches = set() num_possibilities = len(possibilities) for word in words: matches.update(get_close_matches(word, possibilities, n=num_possibilities, cutoff=cutoff)) # substring search if cutoff != 1: for possibility in possibilities: if word.lower() in possibility.lower(): matches.add(possibility) return list(matches) @tools.command( name='list-types', help='List the available semantic types.', short_help='', cls=ToolCommand ) @click.argument('queries', nargs=-1) @click.option('--strict', is_flag=True, help='Show only exact matches for the type argument(s).') @click.option('--tsv', is_flag=True, help='Print as machine readable tab-separated values.') def show_types(queries, strict, tsv): pm = q2cli.util.get_plugin_manager() if len(queries) > 0: matches = _get_matches(queries, list(pm.artifact_classes), strict) else: matches = sorted(list(pm.artifact_classes)) descriptions = {} for match in matches: description = pm.artifact_classes[match].description descriptions[match] = description _print_descriptions(descriptions, tsv) @tools.command( name='list-formats', help='List the available formats.', short_help='', cls=ToolCommand ) 
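# A hedged, self-contained sketch of the matching strategy used by
# _get_matches() above: difflib close matches at a 0.6 cutoff plus a
# case-insensitive substring pass. The query and type names are hypothetical.
#
#   from difflib import get_close_matches
#
#   query = 'FeatureTabel'   # misspelled on purpose
#   possibilities = ['FeatureTable[Frequency]', 'FeatureData[Sequence]']
#   hits = set(get_close_matches(query, possibilities,
#                                n=len(possibilities), cutoff=0.6))
#   hits.update(p for p in possibilities if query.lower() in p.lower())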
@click.argument('queries', nargs=-1) @click.option('--importable', is_flag=True, help='List the importable formats.') @click.option('--exportable', is_flag=True, help='List the exportable formats.') @click.option('--strict', is_flag=True, help='Show only exact matches for the format argument(s).') @click.option('--tsv', is_flag=True, help='Print as machine readable tab-separated values.') def show_formats(queries, importable, exportable, strict, tsv): if importable and exportable: raise click.UsageError("'--importable' and '--exportable' flags are " "mutually exclusive.") if not importable and not exportable: raise click.UsageError("One of '--importable' or '--exportable' flags " "is required.") pm = q2cli.util.get_plugin_manager() portable_formats = pm.importable_formats if importable \ else pm.exportable_formats if len(queries) > 0: matches = _get_matches(queries, portable_formats.keys(), strict) else: matches = sorted(portable_formats.keys()) descriptions = {} for match in matches: docstring = portable_formats[match].format.__doc__ first_docstring_line = docstring.split('\n\n')[0].strip() \ if docstring else '' descriptions[match] = first_docstring_line _print_descriptions(descriptions, tsv) @tools.command(name='import', short_help='Import data into a new QIIME 2 Artifact.', help="Import data to create a new QIIME 2 Artifact. See " "https://docs.qiime2.org/ for usage examples and details " "on the file types and associated semantic types that can " "be imported.", cls=ToolCommand) @click.option('--type', required=True, help='The semantic type of the artifact that will be created ' 'upon importing. Use `qiime tools list-types` to see what ' 'importable semantic types are available in the current ' 'deployment.') @click.option('--input-path', required=True, type=click.Path(exists=True, file_okay=True, dir_okay=True, readable=True), help='Path to file or directory that should be imported.') @click.option('--output-path', required=True, metavar='ARTIFACT', type=click.Path(exists=False, file_okay=True, dir_okay=False, writable=True), help='Path where output artifact should be written.') @click.option('--input-format', required=False, help='The format of the data to be imported. If not provided, ' 'data must be in the format expected by the semantic type ' 'provided via --type. Use `qiime tools list-formats ' '--importable` to see which formats of input data are ' 'importable.') @click.option('--validate-level', default='max', type=click.Choice(['min', 'max']), help='How much to validate the imported data before creating the' ' artifact. A value of "max" will generally read the entire' ' file or directory, whereas "min" will not usually do so.' 
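# A hedged sketch of what the import command below does through the SDK (see
# the _import() helper further down). The semantic type, file name, and output
# path are hypothetical placeholders; view_type=None means "use the default
# format expected by the semantic type".
#
#   import qiime2.sdk
#
#   artifact = qiime2.sdk.Artifact.import_data(
#       'Phylogeny[Rooted]', 'a-tree.nwk',
#       view_type=None, validate_level='max')
#   artifact.save('a-tree.qza')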
' [default: "max"]') def import_data(type, input_path, output_path, input_format, validate_level): from q2cli.core.config import CONFIG artifact = _import(type, input_path, input_format, validate_level) artifact.save(output_path) if input_format is None: input_format = artifact.format.__name__ success = 'Imported %s as %s to %s' % (input_path, input_format, output_path) click.echo(CONFIG.cfg_style('success', success)) @tools.command(short_help='Take a peek at a QIIME 2 Artifact or ' 'Visualization.', help="Display basic information about a QIIME 2 Artifact or " "Visualization, including its UUID and type.", cls=ToolCommand) @click.argument('paths', nargs=-1, required=True, type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), metavar=_COMBO_METAVAR) @click.option('--tsv/--no-tsv', default=False, help='Print as machine-readable tab-separated values.') def peek(paths, tsv): import qiime2.sdk from q2cli.core.config import CONFIG metadatas = {os.path.basename(path): qiime2.sdk.Result.peek(path) for path in paths} if tsv: click.echo("Filename\tType\tUUID\tData Format") for path, m in metadatas.items(): click.echo(f"{path}\t{m.type}\t{m.uuid}\t{m.format}") elif len(metadatas) == 1: metadata = metadatas[os.path.basename(paths[0])] click.echo(CONFIG.cfg_style('type', "UUID")+": ", nl=False) click.echo(metadata.uuid) click.echo(CONFIG.cfg_style('type', "Type")+": ", nl=False) click.echo(metadata.type) if metadata.format is not None: click.echo(CONFIG.cfg_style('type', "Data format")+": ", nl=False) click.echo(metadata.format) else: COLUMN_FILENAME = "Filename" COLUMN_TYPE = "Type" COLUMN_UUID = "UUID" COLUMN_DATA_FORMAT = "Data Format" filename_width = max([len(p) for p in paths] + [len(COLUMN_FILENAME)]) type_width = max([len(i.type) for i in metadatas.values()] + [len(COLUMN_TYPE)]) uuid_width = max([len(i.uuid) for i in metadatas.values()] + [len(COLUMN_UUID)]) data_format_width = \ max([len(i.format) if i.format is not None else 0 for i in metadatas.values()] + [len(COLUMN_DATA_FORMAT)]) padding = 2 format_string = f"{{f:<{filename_width + padding}}} " + \ f"{{t:<{type_width + padding}}} " + \ f"{{u:<{uuid_width + padding}}} " + \ f"{{d:<{data_format_width + padding}}}" click.secho( format_string.format( f=COLUMN_FILENAME, t=COLUMN_TYPE, u=COLUMN_UUID, d=COLUMN_DATA_FORMAT), bold=True, fg="green") for path, m in metadatas.items(): click.echo( format_string.format( f=path, t=m.type, u=m.uuid, d=(m.format if m.format is not None else 'N/A'))) _COLUMN_TYPES = ['categorical', 'numeric'] @tools.command(name='cast-metadata', short_help='Designate metadata column types.', help='Designate metadata column types.' ' Supported column types are as follows: %s.' ' Providing multiple file paths to this command will merge' ' the metadata.' % (', '.join(_COLUMN_TYPES)), cls=ToolCommand) @click.option('--cast', required=True, metavar='COLUMN:TYPE', multiple=True, help='Parameter for each metadata column that should' ' be cast as a specified column type (supported types are as' ' follows: %s). The required formatting for this' ' parameter is --cast COLUMN:TYPE, repeated for each column' ' and the associated column type it should be cast to in' ' the output.' 
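# A hedged sketch of the cast that cast-metadata performs via the SDK:
# Metadata.load() accepts a column-name -> column-type mapping (types limited
# to _COLUMN_TYPES). The file and column names below are hypothetical.
#
#   from qiime2 import Metadata
#
#   md = Metadata.load('sample-metadata.tsv',
#                      {'days_since_start': 'numeric', 'subject': 'categorical'})
#   md.save('cast-metadata.tsv')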
% (', '.join(_COLUMN_TYPES))) @click.option('--ignore-extra', is_flag=True, help='If this flag is enabled, cast parameters that do not' ' correspond to any of the column names within the provided' ' metadata will be ignored.') @click.option('--error-on-missing', is_flag=True, help='If this flag is enabled, failing to include cast' ' parameters for all columns in the provided metadata will' ' result in an error.') @click.option('--output-file', required=False, type=click.Path(exists=False, file_okay=True, dir_okay=False, writable=True), help='Path to file where the modified metadata should be' ' written to.') @click.argument('paths', nargs=-1, required=True, metavar='METADATA...', type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True)) def cast_metadata(paths, cast, output_file, ignore_extra, error_on_missing): import tempfile from qiime2 import Metadata, metadata md = _merge_metadata(paths) cast_dict = {} try: for casting in cast: if ':' not in casting: raise click.BadParameter( message=f'Missing `:` in --cast {casting}', param_hint='cast') splitter = casting.split(':') if len(splitter) != 2: raise click.BadParameter( message=f'Incorrect number of fields in --cast {casting}.' f' Observed {len(splitter)}' f' {tuple(splitter)}, expected 2.', param_hint='cast') col, type_ = splitter if col in cast_dict: raise click.BadParameter( message=(f'Column name "{col}" appears in cast more than' ' once.'), param_hint='cast') cast_dict[col] = type_ except Exception as err: header = \ ('Could not parse provided cast arguments into unique COLUMN:TYPE' ' pairs. Please make sure all cast flags are of the format --cast' ' COLUMN:TYPE') q2cli.util.exit_with_error(err, header=header) types = set(cast_dict.values()) if not types.issubset(_COLUMN_TYPES): raise click.BadParameter( message=('Unknown column type provided. Please make sure all' ' columns included in your cast contain a valid column' ' type. Valid types: %s' % (', '.join(_COLUMN_TYPES))), param_hint='cast') column_names = set(md.columns.keys()) cast_names = set(cast_dict.keys()) if not ignore_extra: if not cast_names.issubset(column_names): cast = cast_names.difference(column_names) raise click.BadParameter( message=('The following cast columns were not found' ' within the metadata: %s' % (', '.join(cast))), param_hint='cast') if error_on_missing: if not column_names.issubset(cast_names): cols = column_names.difference(cast_names) raise click.BadParameter( message='The following columns within the metadata' ' were not provided in the cast: %s' % (', '.join(cols)), param_hint='cast') # Remove entries from the cast dict that are not in the metadata to avoid # errors further down the road for cast in cast_names: if cast not in column_names: cast_dict.pop(cast) with tempfile.NamedTemporaryFile() as temp: md.save(temp.name) try: cast_md = Metadata.load(temp.name, cast_dict) except metadata.io.MetadataFileError as e: raise click.BadParameter(message=e, param_hint='cast') from e if output_file: cast_md.save(output_file) else: with tempfile.NamedTemporaryFile(mode='w+') as stdout_temp: cast_md.save(stdout_temp.name) stdout_str = stdout_temp.read() click.echo(stdout_str) @tools.command(name='inspect-metadata', short_help='Inspect columns available in metadata.', help='Inspect metadata files or artifacts viewable as metadata.' 
' Providing multiple file paths to this command will merge' ' the metadata.', cls=ToolCommand) @click.option('--tsv/--no-tsv', default=False, help='Print as machine-readable TSV instead of text.') @click.argument('paths', nargs=-1, required=True, metavar='METADATA...', type=click.Path(file_okay=True, dir_okay=False, readable=True)) @q2cli.util.pretty_failure(traceback=None) def inspect_metadata(paths, tsv, failure): metadata = _merge_metadata(paths) # we aren't expecting errors below this point, so set traceback to default failure.traceback = 'stderr' failure.header = "An unexpected error has occurred:" COLUMN_NAME = "COLUMN NAME" COLUMN_TYPE = "TYPE" max_name_len = max([len(n) for n in metadata.columns] + [len(COLUMN_NAME)]) max_type_len = max([len(p.type) for p in metadata.columns.values()] + [len(COLUMN_TYPE)]) if tsv: import csv import io def formatter(*row): # This is gross, but less gross than robust TSV writing. with io.StringIO() as fh: writer = csv.writer(fh, dialect='excel-tab', lineterminator='') writer.writerow(row) return fh.getvalue() else: formatter = ("{0:>%d} {1:%d}" % (max_name_len, max_type_len)).format click.secho(formatter(COLUMN_NAME, COLUMN_TYPE), bold=True) if not tsv: click.secho(formatter("=" * max_name_len, "=" * max_type_len), bold=True) for name, props in metadata.columns.items(): click.echo(formatter(name, props.type)) if not tsv: click.secho(formatter("=" * max_name_len, "=" * max_type_len), bold=True) click.secho(("{0:>%d} " % max_name_len).format("IDS:"), bold=True, nl=False) click.echo(metadata.id_count) click.secho(("{0:>%d} " % max_name_len).format("COLUMNS:"), bold=True, nl=False) click.echo(metadata.column_count) def _merge_metadata(paths): m = [q2cli.util.load_metadata(p) for p in paths] metadata = m[0] if m[1:]: metadata = metadata.merge(*m[1:]) return metadata @tools.command(short_help='View a QIIME 2 Visualization.', help="Displays a QIIME 2 Visualization until the command " "exits. To open a QIIME 2 Visualization so it can be " "used after the command exits, use 'qiime tools extract'.", cls=ToolCommand) @click.argument('visualization-path', metavar='VISUALIZATION', type=click.Path(file_okay=True, dir_okay=False, readable=True)) @click.option('--index-extension', required=False, default='html', help='The extension of the index file that should be opened. ' '[default: html]') def view(visualization_path, index_extension): # Guard headless envs from having to import anything large import sys from qiime2 import Visualization from q2cli.util import _load_input from q2cli.core.config import CONFIG if not os.getenv("DISPLAY") and sys.platform != "darwin": raise click.UsageError( 'Visualization viewing is currently not supported in headless ' 'environments. You can view Visualizations (and Artifacts) at ' 'https://view.qiime2.org, or move the Visualization to an ' 'environment with a display and view it with `qiime tools view`.') if index_extension.startswith('.'): index_extension = index_extension[1:] _, visualization = _load_input(visualization_path, view=True)[0] if not isinstance(visualization, Visualization): raise click.BadParameter( '%s is not a QIIME 2 Visualization. Only QIIME 2 Visualizations ' 'can be viewed.' % visualization_path) index_paths = visualization.get_index_paths(relative=False) if index_extension not in index_paths: raise click.BadParameter( 'No index %s file is present in the archive. 
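# A hedged sketch of the merge performed by _merge_metadata() above when
# several metadata files are provided; the file names are hypothetical.
#
#   from qiime2 import Metadata
#
#   merged = Metadata.load('run1-metadata.tsv').merge(
#       Metadata.load('run2-metadata.tsv'))
#   print(merged.id_count, merged.column_count)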
Available index ' 'extensions are: %s' % (index_extension, ', '.join(index_paths.keys()))) else: index_path = index_paths[index_extension] launch_status = click.launch(index_path) if launch_status != 0: click.echo(CONFIG.cfg_style('error', 'Viewing visualization ' 'failed while attempting to open ' f'{index_path}'), err=True) else: while True: click.echo( "Press the 'q' key, Control-C, or Control-D to quit. This " "view may no longer be accessible or work correctly after " "quitting.", nl=False) # There is currently a bug in click.getchar where translation # of Control-C and Control-D into KeyboardInterrupt and # EOFError (respectively) does not work on Python 3. The code # here should continue to work as expected when the bug is # fixed in Click. # # https://github.com/pallets/click/issues/583 try: char = click.getchar() click.echo() if char in {'q', '\x03', '\x04'}: break except (KeyboardInterrupt, EOFError): break @tools.command(short_help="Extract a QIIME 2 Artifact or Visualization " "archive.", help="Extract all contents of a QIIME 2 Artifact or " "Visualization's archive, including provenance, metadata, " "and actual data. Use 'qiime tools export' to export only " "the data stored in an Artifact or Visualization, with " "the choice of exporting to different formats.", cls=ToolCommand) @click.option('--input-path', required=True, metavar=_COMBO_METAVAR, type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), help='Path to file that should be extracted') @click.option('--output-path', required=False, type=click.Path(exists=False, file_okay=False, dir_okay=True, writable=True), help='Directory where archive should be extracted to ' '[default: current working directory]', default=os.getcwd()) def extract(input_path, output_path): import zipfile import qiime2.sdk from q2cli.core.config import CONFIG try: extracted_dir = qiime2.sdk.Result.extract(input_path, output_path) except (zipfile.BadZipFile, ValueError): raise click.BadParameter( '%s is not a valid QIIME 2 Result. Only QIIME 2 Artifacts and ' 'Visualizations can be extracted.' % input_path) else: success = 'Extracted %s to directory %s' % (input_path, extracted_dir) click.echo(CONFIG.cfg_style('success', success)) @tools.command(short_help='Validate data in a QIIME 2 Artifact.', help='Validate data in a QIIME 2 Artifact. QIIME 2 ' 'automatically performs some basic validation when ' 'managing your data; use this command to perform explicit ' 'and/or more thorough validation of your data (e.g. when ' 'debugging issues with your data or analyses).\n\nNote: ' 'validation can take some time to complete, depending on ' 'the size and type of your data.', cls=ToolCommand) @click.argument('path', type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), metavar=_COMBO_METAVAR) @click.option('--level', required=False, type=click.Choice(['min', 'max']), help='Desired level of validation. 
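# A hedged SDK-level sketch of `qiime tools extract` above and of the peek
# used elsewhere in this module; 'table.qza' is a hypothetical path and the
# archive is extracted into the current working directory.
#
#   import qiime2.sdk
#
#   extracted_dir = qiime2.sdk.Result.extract('table.qza', '.')
#   metadata = qiime2.sdk.Result.peek('table.qza')
#   print(metadata.uuid, metadata.type, metadata.format)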
"min" will perform minimal ' 'validation, and "max" will perform maximal validation (at ' 'the potential cost of runtime).', default='max', show_default=True) def validate(path, level): import qiime2.sdk from q2cli.core.config import CONFIG try: result = qiime2.sdk.Result.load(path) except Exception as e: header = 'There was a problem loading %s as a QIIME 2 Result:' % path q2cli.util.exit_with_error(e, header=header) try: result.validate(level) except qiime2.plugin.ValidationError as e: header = 'Result %s does not appear to be valid at level=%s:' % ( path, level) q2cli.util.exit_with_error(e, header=header, traceback=None) except Exception as e: header = ('An unexpected error has occurred while attempting to ' 'validate result %s:' % path) q2cli.util.exit_with_error(e, header=header) else: click.echo(CONFIG.cfg_style('success', f'Result {path} appears to be ' f'valid at level={level}.')) @tools.command(short_help='Print citations for a QIIME 2 result.', help='Print citations as a BibTex file (.bib) for a QIIME 2' ' result.', cls=ToolCommand) @click.argument('path', type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), metavar=_COMBO_METAVAR) def citations(path): import qiime2.sdk import io from q2cli.core.config import CONFIG ctx = click.get_current_context() try: result = qiime2.sdk.Result.load(path) except Exception as e: header = 'There was a problem loading %s as a QIIME 2 result:' % path q2cli.util.exit_with_error(e, header=header) if result.citations: with io.StringIO() as fh: result.citations.save(fh) click.echo(fh.getvalue(), nl=False) ctx.exit(0) else: click.echo(CONFIG.cfg_style('problem', 'No citations found.'), err=True) ctx.exit(1) @tools.command(name='cache-create', short_help='Create an empty cache at the given location.', help='Create an empty cache at the given location.', cls=ToolCommand) @click.option('--cache', required=True, type=click.Path(exists=False, readable=True), help='Path to a nonexistent directory to be created as a cache.') def cache_create(cache): from qiime2.core.cache import Cache from q2cli.core.config import CONFIG try: Cache(cache) except Exception as e: header = "There was a problem creating a cache at '%s':" % cache q2cli.util.exit_with_error(e, header=header, traceback=None) success = "Created cache at '%s'" % cache click.echo(CONFIG.cfg_style('success', success)) @tools.command(name='cache-remove', short_help='Removes a given key from a cache.', help='Removes a given key from a cache then runs garbage ' 'collection on the cache.', cls=ToolCommand) @click.option('--cache', required=True, type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to remove the key from.') @click.option('--key', required=True, help='The key to remove from the cache.') def cache_remove(cache, key): from qiime2.core.cache import Cache from q2cli.core.config import CONFIG try: _cache = Cache(cache) _cache.remove(key) except Exception as e: header = "There was a problem removing the key '%s' from the " \ "cache '%s':" % (key, cache) q2cli.util.exit_with_error(e, header=header, traceback=None) success = "Removed key '%s' from cache '%s'" % (key, cache) click.echo(CONFIG.cfg_style('success', success)) @tools.command(name='cache-garbage-collection', short_help='Runs garbage collection on the cache at the ' 'specified location.', help='Runs garbage collection on the cache at the specified ' 'location if the specified location is a cache.', cls=ToolCommand) @click.option('--cache', required=True, 
type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to run garbage collection on.') def cache_garbage_collection(cache): from qiime2.core.cache import Cache from q2cli.core.config import CONFIG try: _cache = Cache(cache) _cache.garbage_collection() except Exception as e: header = "There was a problem running garbage collection on the " \ "cache at '%s':" % cache q2cli.util.exit_with_error(e, header=header, traceback=None) success = "Ran garbage collection on cache at '%s'" % cache click.echo(CONFIG.cfg_style('success', success)) @tools.command(name='cache-store', short_help='Stores a .qza in the cache under a key.', help='Stores a .qza in the cache under a key.', cls=ToolCommand) @click.option('--cache', required=True, type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to save into.') @click.option('--artifact-path', required=True, type=click.Path(exists=True, file_okay=True, dir_okay=False, readable=True), help='Path to a .qza to save into the cache.') @click.option('--key', required=True, help='The key to save the artifact under (must be a valid ' 'Python identifier).') def cache_store(cache, artifact_path, key): from qiime2.sdk.result import Result from qiime2.core.cache import Cache from q2cli.core.config import CONFIG try: artifact = Result.load(artifact_path) _cache = Cache(cache) _cache.save(artifact, key) except Exception as e: header = "There was a problem saving the artifact '%s' to the cache " \ "'%s' under the key '%s':" % (artifact_path, cache, key) q2cli.util.exit_with_error(e, header=header, traceback=None) success = "Saved the artifact '%s' to the cache '%s' under the key " \ "'%s'" % (artifact_path, cache, key) click.echo(CONFIG.cfg_style('success', success)) @tools.command(name='cache-import', short_help='Imports data into an Artifact in the cache under a ' 'key.', help='Imports data into an Artifact in the cache under a key.', cls=ToolCommand) @click.option('--type', required=True, help='The semantic type of the artifact that will be created ' 'upon importing. Use `qiime tools list-types` to see what ' 'importable semantic types are available in the current ' 'deployment.') @click.option('--input-path', required=True, type=click.Path(exists=True, file_okay=True, dir_okay=True, readable=True), help='Path to file or directory that should be imported.') @click.option('--cache', required=True, type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to save into.') @click.option('--key', required=True, help='The key to save the artifact under (must be a valid ' 'Python identifier).') @click.option('--input-format', required=False, help='The format of the data to be imported. If not provided, ' 'data must be in the format expected by the semantic type ' 'provided via --type. Use `qiime tools list-formats ' '--importable` to see which formats of input data are ' 'importable.') @click.option('--validate-level', required=False, default='max', type=click.Choice(['min', 'max']), help='How much to validate the imported data before creating the' ' artifact. A value of "max" will generally read the entire' ' file or directory, whereas "min" will not usually do so.' 
' [default: "max"]') def cache_import(type, input_path, cache, key, input_format, validate_level): from qiime2 import Cache from q2cli.core.config import CONFIG artifact = _import(type, input_path, input_format, validate_level) _cache = Cache(cache) _cache.save(artifact, key) if input_format is None: input_format = artifact.format.__name__ success = 'Imported %s as %s to %s:%s' % (input_path, input_format, cache, key) click.echo(CONFIG.cfg_style('success', success)) def _import(type, input_path, input_format, validate_level): import qiime2.sdk import qiime2.plugin try: artifact = qiime2.sdk.Artifact.import_data( type, input_path, view_type=input_format, validate_level=validate_level) except qiime2.plugin.ValidationError as e: header = 'There was a problem importing %s:' % input_path q2cli.util.exit_with_error(e, header=header, traceback=None) except Exception as e: header = 'An unexpected error has occurred:' q2cli.util.exit_with_error(e, header=header) return artifact @tools.command(name='cache-fetch', short_help='Fetches an artifact out of a cache into a .qza.', help='Fetches the artifact saved to the specified cache under ' 'the specified key into a .qza at the specified location.', cls=ToolCommand) @click.option('--cache', required=True, type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to load from.') @click.option('--key', required=True, help='The key to the artifact being loaded.') @click.option('--output-path', required=True, type=click.Path(exists=False, readable=True), help='Path to put the .qza we are loading the artifact into.') def cache_fetch(cache, key, output_path): from qiime2.core.cache import Cache from q2cli.core.config import CONFIG try: _cache = Cache(cache) artifact = _cache.load(key) artifact.save(output_path) except Exception as e: header = "There was a problem loading the artifact with the key " \ "'%s' from the cache '%s' and saving it to the file '%s':" % \ (key, cache, output_path) q2cli.util.exit_with_error(e, header=header, traceback=None) success = "Loaded artifact with the key '%s' from the cache '%s' and " \ "saved it to the file '%s'" % (key, cache, output_path) click.echo(CONFIG.cfg_style('success', success)) @tools.command(name='cache-status', short_help='Checks the status of the cache.', help='Lists all keys in the given cache.
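# A hedged end-to-end sketch of the cache commands above, driven through the
# SDK directly; the cache path, artifact path, and key are hypothetical.
#
#   from qiime2.core.cache import Cache
#   from qiime2.sdk.result import Result
#
#   cache = Cache('/tmp/example-cache')   # created if it does not already exist
#   cache.save(Result.load('table.qza'), 'my_table')
#   artifact = cache.load('my_table')
#   cache.remove('my_table')              # also runs garbage collection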
Peeks artifacts ' 'pointed to by keys to data and lists the number of ' 'artifacts in the pool for keys to pools.', cls=ToolCommand) @click.option('--cache', required=True, type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True), help='Path to an existing cache to check the status of.') def cache_status(cache): from qiime2.core.cache import Cache from qiime2.sdk.result import Result from q2cli.core.config import CONFIG data_output = [] pool_output = [] try: _cache = Cache(cache) with _cache.lock: for key in _cache.get_keys(): key_values = _cache.read_key(key) if 'data' in key_values: data = key_values['data'] data_output.append( 'data: %s -> %s' % (key, str(Result.peek(_cache.data / data)))) elif 'pool' in key_values: pool = key_values['pool'] pool_output.append( 'pool: %s -> size = %s' % (key, str(len(os.listdir(_cache.pools / pool))))) except Exception as e: header = "There was a problem getting the status of the cache at " \ "path '%s':" % cache q2cli.util.exit_with_error(e, header=header, traceback=None) if not data_output: data_output = 'No data keys in cache' else: data_output = '\n'.join(data_output) data_output = 'Data keys in cache:\n' + data_output if not pool_output: pool_output = 'No pool keys in cache' else: pool_output = '\n'.join(pool_output) pool_output = 'Pool keys in cache:\n' + pool_output output = data_output + '\n\n' + pool_output success = "Status of the cache at the path '%s':\n\n%s" % \ (cache, output) click.echo(CONFIG.cfg_style('success', success)) replay_in_fp_help = ( 'filepath to a QIIME 2 Archive (.qza or .qzv) or directory of Archives' ) replay_recurse_help = ( 'if in-fp is a directory, will also search sub-directories when finding ' 'Archives to parse' ) replay_validate_checksums_help = ( 'check that replayed archives are intact and uncorrupted' ) replay_parse_metadata_help = ( 'parse the original metadata captured in provenance for review or replay' ) replay_use_recorded_metadata_help = ( 're-use the original metadata captured in provenance' ) replay_suppress_header_help = ( 'do not write header/footer blocks in the output script' ) replay_verbose_help = ( 'print status messages to stdout while processing' ) replay_dump_recorded_metadata_help = ( 'write the original metadata captured in provenance to disk in the ' '--metadata-out-dir directory' ) @tools.command(name='replay-provenance', cls=ToolCommand) @click.option('--in-fp', required=True, help=replay_in_fp_help) @click.option('--recurse/--no-recurse', default=False, show_default=True, help=replay_recurse_help) @click.option('--usage-driver', default='cli', show_default=True, help='the target interface for your replay script', type=click.Choice(['python3', 'cli'], case_sensitive=False)) @click.option('--validate-checksums/--no-validate-checksums', default=True, show_default=True, help=replay_validate_checksums_help) @click.option('--parse-metadata/--no-parse-metadata', default=True, show_default=True, help=replay_parse_metadata_help) @click.option('--use-recorded-metadata/--no-use-recorded-metadata', default=False, show_default=True, help=replay_use_recorded_metadata_help) @click.option('--suppress-header/--no-suppress-header', default=False, show_default=True, help=replay_suppress_header_help) @click.option('--verbose/--no-verbose', default=True, show_default=True, help=replay_verbose_help) @click.option('--dump-recorded-metadata/--no-dump-recorded-metadata', default=True, show_default=True, help=replay_dump_recorded_metadata_help) @click.option('--metadata-out-dir', default='', 
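# A hedged usage sketch for the replay-provenance command declared below; the
# input directory and output filename are hypothetical placeholders:
#
#   qiime tools replay-provenance \
#     --in-fp ./results --recurse \
#     --usage-driver cli \
#     --out-fp replay.sh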
show_default=True, help=('the directory where captured study metadata ' 'should be written if --dump-recorded-metadata. This ' 'often produces many outputs, so a dedicated directory ' 'should generally be used. Creates the directory if it ' 'does not already exist. By default, metadata is written ' 'to `${PWD}/recorded_metadata/`')) @click.option('--out-fp', required=True, type=click.Path(exists=False, writable=True), help='the filepath where your replay script should be written') def provenance_replay( in_fp: str, out_fp: str, usage_driver: Literal['python3', 'cli'], recurse: bool = False, validate_checksums: bool = True, parse_metadata: bool = True, use_recorded_metadata: bool = False, suppress_header: bool = False, verbose: bool = True, dump_recorded_metadata: bool = True, metadata_out_dir: str = '' ): """ Replay provenance from a QIIME 2 Artifact filepath to a written executable """ from qiime2.core.archive.provenance_lib.replay import replay_provenance from qiime2.sdk.util import get_available_usage_drivers usage_drivers = get_available_usage_drivers() try: usage_driver_type = usage_drivers[usage_driver] except KeyError: msg = ( f'The {usage_driver} usage driver is not available in the ' 'current environment.' ) raise ValueError(msg) replay_provenance( usage_driver=usage_driver_type, payload=in_fp, out_fp=out_fp, validate_checksums=validate_checksums, parse_metadata=parse_metadata, recurse=recurse, use_recorded_metadata=use_recorded_metadata, suppress_header=suppress_header, verbose=verbose, dump_recorded_metadata=dump_recorded_metadata, md_out_dir=metadata_out_dir ) filename = os.path.realpath(out_fp) click.echo(f'{usage_driver} replay script written to {filename}') @tools.command(name='replay-citations', cls=ToolCommand) @click.option('--in-fp', required=True, help=replay_in_fp_help) @click.option('--recurse/--no-recurse', default=False, show_default=True, help=replay_recurse_help) @click.option('--deduplicate/--no-deduplicate', default=True, show_default=True, help=('If deduplicate, duplicate citations will be removed ' 'heuristically, e.g. by comparing DOI fields. ' 'This greatly reduces manual curation of reference lists, ' 'but introduces a small risk of reference loss.')) @click.option('--suppress-header/--no-suppress-header', default=False, show_default=True, help=replay_suppress_header_help) @click.option('--verbose/--no-verbose', default=True, show_default=True, help=replay_verbose_help) @click.option('--out-fp', required=True, type=click.Path(exists=False, writable=True), help='the filepath where your bibtex file should be written') def citations_replay( in_fp: str, out_fp: str, recurse: bool = False, deduplicate: bool = True, suppress_header: bool = False, verbose: bool = True ): """ Reports all citations from a QIIME 2 Artifact or directory of Artifacts, with the goal of improving and simplifying attribution of/in published work. Not for use in reporting e.g. software versions used in an analysis, as deduplication removes duplicate references with different plugin versions. 
""" from qiime2.core.archive.provenance_lib.parse import ProvDAG from qiime2.core.archive.provenance_lib.replay import replay_citations dag = ProvDAG(in_fp, verbose=verbose, recurse=recurse) replay_citations( dag, out_fp=out_fp, deduplicate=deduplicate, suppress_header=suppress_header ) filename = os.path.realpath(out_fp) click.echo(f'citations bibtex file written to {filename}') @tools.command(name='replay-supplement', cls=ToolCommand) @click.option('--in-fp', required=True, help='filepath to a QIIME 2 Archive or directory of Archives') @click.option('--recurse/--no-recurse', default=False, show_default=True, help=('if in-fp is a directory, will also search sub-directories' ' when finding .qza/.qzv files to parse')) @click.option('--deduplicate/--no-deduplicate', default=True, show_default=True, help=('If deduplicate, duplicate citations will be removed ' 'heuristically, e.g. by comparing DOI fields. ' 'This greatly reduces manual curation of reference lists, ' 'but introduces a small risk of reference loss.')) @click.option('--validate-checksums/--no-validate-checksums', default=True, show_default=True, help=replay_validate_checksums_help) @click.option('--parse-metadata/--no-parse-metadata', default=True, show_default=True, help=replay_parse_metadata_help) @click.option('--use-recorded-metadata/--no-use-recorded-metadata', default=False, show_default=True, help=replay_use_recorded_metadata_help) @click.option('--suppress-header/--no-suppress-header', default=False, show_default=True, help=replay_suppress_header_help) @click.option('--verbose/--no-verbose', default=True, show_default=True, help=replay_verbose_help) @click.option('--dump-recorded-metadata/--no-dump-recorded-metadata', default=True, show_default=True, help='write the original metadata captured in provenance to ' 'recorded_metadata/ inside the archive') @click.option('--out-fp', required=True, type=click.Path(exists=False, writable=True), help='the filepath where your reproduciblity supplement zipfile ' 'should be written') def supplement_replay( in_fp: str, out_fp: str, validate_checksums: bool = True, parse_metadata: bool = True, use_recorded_metadata: bool = False, recurse: bool = False, deduplicate: bool = True, suppress_header: bool = False, verbose: bool = True, dump_recorded_metadata: bool = True ): """ Produces a zipfile package of useful documentation supporting in silico reproducibility of some QIIME 2 Result(s) from a QIIME 2 Artifact or directory of Artifacts. Package includes: - replay scripts for all supported interfaces - a bibtex-formatted collection of all citations """ from qiime2.core.archive.provenance_lib.replay import replay_supplement from qiime2.sdk.util import get_available_usage_drivers usage_drivers = get_available_usage_drivers() usage_driver_types = list(usage_drivers.values()) if not usage_driver_types: msg = ( 'There are no available usage drivers registered in the current ' 'environment.' 
) raise ValueError(msg) replay_supplement( usage_drivers=usage_driver_types, payload=in_fp, out_fp=out_fp, validate_checksums=validate_checksums, parse_metadata=parse_metadata, use_recorded_metadata=use_recorded_metadata, recurse=recurse, deduplicate=deduplicate, suppress_header=suppress_header, verbose=verbose, dump_recorded_metadata=dump_recorded_metadata ) q2cli-2024.5.0/q2cli/click/000077500000000000000000000000001462552630000150605ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/click/__init__.py000066400000000000000000000005351462552630000171740ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- q2cli-2024.5.0/q2cli/click/command.py000066400000000000000000000327131462552630000170560ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- # ---------------------------------------------------------------------------- # Some of the source code in this file is derived from original work: # # Copyright (c) 2014 by the Pallets team. # # To see the license for the original work, see licenses/click.LICENSE.rst # Specific reproduction and derivation of original work is marked below. # ---------------------------------------------------------------------------- import click import click.core class BaseCommandMixin: # Modified from original: # < https://github.com/pallets/click/blob/ # c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L867 > # Copyright (c) 2014 by the Pallets team. def make_parser(self, ctx): """Creates the underlying option parser for this command.""" from .parser import Q2Parser parser = Q2Parser(ctx) for param in self.get_params(ctx): param.add_to_parser(parser, ctx) return parser # Modified from original: # < https://github.com/pallets/click/blob/ # c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L934 > # Copyright (c) 2014 by the Pallets team. 
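# A hedged sketch of how this mixin is consumed: commands opt in by passing
# ToolCommand (defined at the bottom of this module) to click, exactly as the
# builtin q2cli commands do. The command and option names are hypothetical.
#
#   import click
#   from q2cli.click.command import ToolCommand
#
#   @click.command(help='An illustrative command.', cls=ToolCommand)
#   @click.option('--example-input', required=True, help='Any value.')
#   def example(example_input):
#       click.echo(example_input)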
def parse_args(self, ctx, args): from q2cli.core.config import CONFIG if isinstance(self, click.MultiCommand): return super().parse_args(ctx, args) errors = [] parser = self.make_parser(ctx) skip_rest = False for _ in range(10): # surely this is enough attempts try: opts, args, param_order = parser.parse_args(args=args) break except click.ClickException as e: errors.append(e) skip_rest = True if not skip_rest: for param in click.core.iter_params_for_processing( param_order, self.get_params(ctx)): try: value, args = param.handle_parse_result(ctx, opts, args) except click.ClickException as e: errors.append(e) if args and not ctx.allow_extra_args and not ctx.resilient_parsing: errors.append(click.UsageError( 'Got unexpected extra argument%s (%s)' % (len(args) != 1 and 's' or '', ' '.join(map(click.core.make_str, args))))) if errors: click.echo(ctx.get_help()+"\n", err=True) if len(errors) > 1: problems = 'There were some problems with the command:' else: problems = 'There was a problem with the command:' click.echo(CONFIG.cfg_style('problem', problems.center(78, ' ')), err=True) for idx, e in enumerate(errors, 1): msg = click.formatting.wrap_text( e.format_message(), initial_indent=' (%d/%d%s) ' % (idx, len(errors), '?' if skip_rest else ''), subsequent_indent=' ') click.echo(CONFIG.cfg_style('error', msg), err=True) ctx.exit(1) ctx.args = args return args def get_option_names(self, ctx): if not hasattr(self, '__option_names'): names = set() for param in self.get_params(ctx): if hasattr(param, 'q2_name'): names.add(param.q2_name) else: names.add(param.name) self.__option_names = names return self.__option_names def list_commands(self, ctx): if not hasattr(super(), 'list_commands'): return [] return super().list_commands(ctx) def get_opt_groups(self, ctx): return {'Options': list(self.get_params(ctx))} def format_help_text(self, ctx, formatter): super().format_help_text(ctx, formatter) formatter.write_paragraph() # Modified from original: # < https://github.com/pallets/click/blob # /c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L830 > # Copyright (c) 2014 by the Pallets team. def format_usage(self, ctx, formatter): from q2cli.core.config import CONFIG """Writes the usage line into the formatter.""" pieces = self.collect_usage_pieces(ctx) formatter.write_usage(CONFIG.cfg_style('command', ctx.command_path), ' '.join(pieces)) def format_options(self, ctx, formatter, COL_MAX=23, COL_MIN=10): from q2cli.core.config import CONFIG # write options opt_groups = {} records = [] for group, options in self.get_opt_groups(ctx).items(): opt_records = [] for o in options: record = o.get_help_record(ctx) if record is None: continue opt_records.append((o, record)) records.append(record) opt_groups[group] = opt_records first_columns = (r[0] for r in records) border = min(COL_MAX, max(COL_MIN, *(len(col) for col in first_columns if len(col) < COL_MAX))) for opt_group, opt_records in opt_groups.items(): if not opt_records: continue formatter.write_heading(click.style(opt_group, bold=True)) formatter.indent() padded_border = border + formatter.current_indent for opt, record in opt_records: self.write_option(ctx, formatter, opt, record, padded_border) formatter.dedent() # Modified from original: # https://github.com/pallets/click/blob # /c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L1056 # Copyright (c) 2014 by the Pallets team. commands = [] for subcommand in self.list_commands(ctx): cmd = self.get_command(ctx, subcommand) # What is this, the tool lied about a command. 
Ignore it if cmd is None: continue if cmd.hidden: continue commands.append((subcommand, cmd)) # allow for 3 times the default spacing if len(commands): limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) rows = [] for subcommand, cmd in commands: help = cmd.get_short_help_str(limit) rows.append((CONFIG.cfg_style('command', subcommand), help)) if rows: with formatter.section(click.style('Commands', bold=True)): formatter.write_dl(rows) def write_option(self, ctx, formatter, opt, record, border, COL_SPACING=2): import itertools from q2cli.core.config import CONFIG full_width = formatter.width - formatter.current_indent indent_text = ' ' * formatter.current_indent opt_text, help_text = record opt_text_secondary = None if type(opt_text) is tuple: opt_text, opt_text_secondary = opt_text help_text, requirements = self._clean_help(help_text) type_placement = None type_repr = None type_indent = 2 * indent_text if hasattr(opt.type, 'get_type_repr'): type_repr = opt.type.get_type_repr(opt) if type_repr is not None: if len(type_repr) <= border - len(type_indent): type_placement = 'under' else: type_placement = 'beside' if len(opt_text) > border: lines = simple_wrap(opt_text, full_width) else: lines = [opt_text.split(' ')] if opt_text_secondary is not None: lines.append(opt_text_secondary.split(' ')) to_write = [] for tokens in lines: dangling_edge = formatter.current_indent styled = [] for token in tokens: dangling_edge += len(token) + 1 if token.startswith('--'): token = CONFIG.cfg_style('option', token, required=opt.required) styled.append(token) line = indent_text + ' '.join(styled) to_write.append(line) formatter.write('\n'.join(to_write)) dangling_edge -= 1 if type_placement == 'beside': lines = simple_wrap(type_repr, formatter.width - len(type_indent), start_col=dangling_edge - 1) to_write = [] first_iter = True for tokens in lines: line = ' '.join(tokens) if first_iter: dangling_edge += 1 + len(line) line = " " + CONFIG.cfg_style('type', line) first_iter = False else: dangling_edge = len(type_indent) + len(line) line = type_indent + CONFIG.cfg_style('type', line) to_write.append(line) formatter.write('\n'.join(to_write)) if dangling_edge + 1 > border + COL_SPACING: formatter.write('\n') left_col = [] else: padding = ' ' * (border + COL_SPACING - dangling_edge) formatter.write(padding) dangling_edge += len(padding) left_col = [''] # jagged start if type_placement == 'under': padding = ' ' * (border + COL_SPACING - len(type_repr) - len(type_indent)) line = ''.join( [type_indent, CONFIG.cfg_style('type', type_repr), padding]) left_col.append(line) if hasattr(opt, 'meta_help') and opt.meta_help is not None: meta_help = simple_wrap(opt.meta_help, border - len(type_indent) - 1) for idx, line in enumerate([' '.join(t) for t in meta_help]): if idx == 0: line = type_indent + '(' + line else: line = type_indent + ' ' + line if idx == len(meta_help) - 1: line += ')' line += ' ' * (border - len(line) + COL_SPACING) left_col.append(line) right_col = simple_wrap(help_text, formatter.width - border - COL_SPACING) right_col = [' '.join(self._color_important(tokens, ctx)) for tokens in right_col] to_write = [] for left, right in itertools.zip_longest( left_col, right_col, fillvalue=' ' * (border + COL_SPACING)): to_write.append(left) if right.strip(): to_write[-1] += right formatter.write('\n'.join(to_write)) if requirements is None: formatter.write('\n') else: if to_write: if len(to_write) > 1 or ((not left_col) or left_col[0] != ''): dangling_edge = 0 dangling_edge += 
click.formatting.term_len(to_write[-1]) else: pass # dangling_edge is still correct if dangling_edge + 1 + len(requirements) > formatter.width: formatter.write('\n') pad = formatter.width - len(requirements) else: pad = formatter.width - len(requirements) - dangling_edge formatter.write( (' ' * pad) + CONFIG.cfg_style( 'default_arg', requirements) + '\n') def _color_important(self, tokens, ctx): import re from q2cli.core.config import CONFIG for t in tokens: if '_' in t: names = self.get_option_names(ctx) if re.sub(r'[^\w]', '', t) in names: m = re.search(r'(\w+)', t) word = t[m.start():m.end()] word = CONFIG.cfg_style('emphasis', word.replace('_', '-')) token = t[:m.start()] + word + t[m.end():] yield token continue yield t def _clean_help(self, text): reqs = ['[required]', '[optional]', '[default: '] requirement = None for req in reqs: if req in text: requirement = req break else: return text, None req_idx = text.index(requirement) return text[:req_idx].strip(), text[req_idx:].strip() class ToolCommand(BaseCommandMixin, click.Command): pass class ToolGroupCommand(BaseCommandMixin, click.Group): pass def simple_wrap(text, target, start_col=0): result = [[]] current_line = result[0] current_width = start_col tokens = [] for token in text.split(' '): if len(token) <= target: tokens.append(token) else: for i in range(0, len(token), target): tokens.append(token[i:i+target]) for token in tokens: token_len = len(token) if current_width + 1 + token_len > target: current_line = [token] result.append(current_line) current_width = token_len else: result[-1].append(token) current_width += 1 + token_len return result q2cli-2024.5.0/q2cli/click/licenses/000077500000000000000000000000001462552630000166655ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/click/licenses/click.LICENSE.rst000066400000000000000000000035251462552630000215720ustar00rootroot00000000000000Copyright © 2014 by the Pallets team. Some rights reserved. Redistribution and use in source and binary forms of the software as well as documentation, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ---- Click uses parts of optparse written by Gregory P. Ward and maintained by the Python Software Foundation. 
This is limited to code in parser.py. Copyright © 2001-2006 Gregory P. Ward. All rights reserved. Copyright © 2002-2006 Python Software Foundation. All rights reserved. q2cli-2024.5.0/q2cli/click/option.py000066400000000000000000000337431462552630000167540ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import click from .type import QIIME2Type # Sentinel to avoid the situation where `None` *is* the default value. NoDefault = {} class GeneratedOption(click.Option): def __init__(self, *, prefix, name, repr, ast, multiple, is_bool_flag, metadata, metavar, default=NoDefault, description=None, **attrs): import q2cli.util if metadata is not None: prefix = 'm' if multiple is not None: if multiple == 'list': multiple = list elif multiple == 'dict': multiple = dict else: multiple = set if is_bool_flag: yes = q2cli.util.to_cli_name(name) no = q2cli.util.to_cli_name('no_' + name) opt = f'--{prefix}-{yes}/--{prefix}-{no}' elif metadata is not None: cli_name = q2cli.util.to_cli_name(name) opt = f'--{prefix}-{cli_name}-file' if metadata == 'column': self.q2_extra_dest, self.q2_extra_opts, _ = \ self._parse_decls([f'--{prefix}-{cli_name}-column'], True) else: cli_name = q2cli.util.to_cli_name(name) opt = f'--{prefix}-{cli_name}' click_type = QIIME2Type(ast, repr, is_output=prefix == 'o') attrs['metavar'] = metavar attrs['multiple'] = multiple is not None attrs['param_decls'] = [opt] attrs['required'] = default is NoDefault attrs['help'] = self._add_default(description, default) if default is not NoDefault: attrs['default'] = default # This is to evade clicks __DEBUG__ check if not is_bool_flag: attrs['type'] = click_type else: attrs['type'] = None # This nonsense: # https://github.com/pallets/click/blob # /08f71b08e2b7ee9b1ea27daf6d3040999fc68551 # /src/click/core.py#L2576-L2584 if is_bool_flag and multiple is not None: to_add_multiple = attrs.pop('multiple') super().__init__(**attrs) if is_bool_flag and multiple is not None: self.multiple = to_add_multiple # put things back the way they _should_ be after evading __DEBUG__ self.is_bool_flag = is_bool_flag self.type = click_type # attrs we will use elsewhere self.q2_multiple = multiple self.q2_prefix = prefix self.q2_name = name self.q2_ast = ast self.q2_metadata = metadata @property def meta_help(self): if self.q2_metadata == 'file': return 'multiple arguments will be merged' def _add_default(self, desc, default): if desc is not None: desc += ' ' else: desc = '' if default is not NoDefault: if default is None: desc += '[optional]' else: desc += '[default: %r]' % (default,) return desc def consume_value(self, ctx, opts): if self.q2_metadata == 'column': return self._consume_metadata(ctx, opts) else: return super().consume_value(ctx, opts) def _consume_metadata(self, ctx, opts): # double consume # this consume deals with the metadata file md_file, source = super().consume_value(ctx, opts) # consume uses self.name, so mutate but backup for after backup, self.name = self.name, self.q2_extra_dest try: # this consume deals with the metadata column md_col, _ = super().consume_value(ctx, opts) # If `--m-metadata-column` isn't provided, need to set md_col to None # in order for the click.MissingParameter errors below to be raised except 
click.MissingParameter: md_col = None self.name = backup # These branches won't get hit unless there's a value associated with # md_col - the try/except case above handled the situation where the # metadata_column parameter itself wasn't provided (vs just a value) if (md_col is None) != (md_file is None): # missing one or the other if md_file is None: raise click.MissingParameter(ctx=ctx, param=self) else: raise click.MissingParameter(param_hint=self.q2_extra_opts, ctx=ctx, param=self) if md_col is None and md_file is None: return (None, source) else: return ((md_file, md_col), source) def get_help_record(self, ctx): record = super().get_help_record(ctx) if self.is_bool_flag: metavar = self.make_metavar() if metavar: record = (record[0] + ' ' + self.make_metavar(), record[1]) elif self.q2_metadata == 'column': opts = (record[0], self.q2_extra_opts[0] + ' COLUMN ') record = (opts, record[1]) return record # Override def add_to_parser(self, parser, ctx): shared = dict(dest=self.name, nargs=0, obj=self) if self.q2_metadata == 'column': parser.add_option(opts=self.opts, action='store', dest=self.name, nargs=1, obj=self) parser.add_option(opts=self.q2_extra_opts, action='store', dest=self.q2_extra_dest, nargs=1, obj=self) elif self.is_bool_flag: if self.multiple: action = 'append_maybe' else: action = 'store_maybe' parser.add_option(opts=self.opts, action=action, const=True, **shared) parser.add_option(opts=self.secondary_opts, action=action, const=False, **shared) elif self.multiple: action = 'append_greedy' parser.add_option(opts=self.opts, action='append_greedy', **shared) else: super().add_to_parser(parser, ctx) def get_default(self, ctx, call=True): if self.required and not ctx.resilient_parsing and not ( self.q2_prefix == 'o' and ctx.params.get('output_dir', False)): raise click.MissingParameter(ctx=ctx, param=self) return super().get_default(ctx, call=call) def process_value(self, ctx, value): try: return super().process_value(ctx, value) except click.MissingParameter: if not (self.q2_prefix == 'o' and ctx.params.get('output_dir', False)): raise def type_cast_value(self, ctx, value): import sys import q2cli.util import qiime2.sdk.util if self.multiple: if value == () or value is None: return None elif self.q2_prefix == 'i': value = super().type_cast_value(ctx, value) keys, value = self._split_and_validate_input_keys(value) if self.q2_multiple is set: self._check_length(value, ctx) # This means we loaded a proper Collection directory. When we # load in a Collection directory for an action that takes a # Collection input, we get a tuple containing a dictionary of # the Collection we wanted. When we load in a Collection # directory for an action that takes a List, we get a list # containing a dictionary of the Collection we wanted. We just # extract that dictionary. 
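                # A sketch of that unwrapping (the keys and artifacts shown
                # are hypothetical placeholders, not values produced here):
                #   ({'a': <Artifact>, 'b': <Artifact>},) -> {'a': ..., 'b': ...}
                #   [{'a': <Artifact>, 'b': <Artifact>}]  -> {'a': ..., 'b': ...}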
if (isinstance(value, tuple) or isinstance(value, list)) \ and len(value) == 1 and isinstance(value[0], dict): value = value[0] # We already have a dict, so we already have keys if isinstance(value, dict): keys = value.keys() value = list(value.values()) elif self.q2_multiple is dict: if keys is None: keys = range(len(value)) value = value else: value = self.q2_multiple(value) type_expr = qiime2.sdk.util.type_from_ast(self.q2_ast) args = ', '.join(map(repr, (x.type for x in value))) if value not in type_expr: raise click.BadParameter( 'received <%s> as an argument, which is incompatible' ' with parameter type: %r' % (args, type_expr), ctx=ctx, param=self) if self.q2_multiple is dict: value = {str(k): v for k, v in zip(keys, value)} return value elif self.q2_metadata == 'file': value = super().type_cast_value(ctx, value) if len(value) == 1: return value[0] else: try: return value[0].merge(*value[1:]) except Exception as e: header = ("There was an issue with merging " "QIIME 2 Metadata:") tb = 'stderr' if '--verbose' in sys.argv else None q2cli.util.exit_with_error( e, header=header, traceback=tb) elif self.q2_prefix == 'p': try: _values = [] if self.q2_multiple is set: self._check_length(value, ctx) keys = [] if self.q2_multiple is dict and type(value) is not dict: _values = {} keyed = False unkeyed = False # All params in a Collection must be either keyed or # unkeyed. We cannot have a mix because it makes things # ambiguous for idx, item in enumerate(value): if ':' in item: if unkeyed: raise KeyError( 'The keyed value <%s> has been mixed' ' with unkeyed values. All values must' ' be keyed or unkeyed.' % item) key, _value = item.split(':', 1) _values[key] = _value keyed = True else: if keyed: raise KeyError( 'The unkeyed value <%s> has been' ' mixed with keyed values. All values' ' must be keyed or unkeyed.' % item) _values[str(idx)] = item unkeyed = True else: _values = value value = \ qiime2.sdk.util.parse_primitive(self.q2_ast, _values) except ValueError: args = ', '.join(map(repr, value)) expr = qiime2.sdk.util.type_from_ast(self.q2_ast) raise click.BadParameter( 'received <%s> as an argument, which is incompatible' ' with parameter type: %r' % (args, expr), ctx=ctx, param=self) return value elif self.q2_prefix == 'i': value = super().type_cast_value(ctx, value) if value is not None: return value[1] return value # We have an output here return super().type_cast_value(ctx, value) def _split_and_validate_input_keys(self, value): """ This function ensures that if a user passed in a de-facto collection they did so properly. """ keys = [t[0] for t in value] values = [t[1] for t in value] if any(key is not None and not key.isidentifier() for key in keys): raise ValueError('All keys must be valid Python identifiers.' 
' Python identifier rules may be found here' ' https://www.askpython.com/python/' 'python-identifiers-rules-best-practices') # If we had no keys, we are fine if all(key is None for key in keys): return None, values has_nones = any(key is None for key in keys) has_keys = any(key is not None for key in keys) # We cannot have keys for something that isn't a dict if self.q2_multiple is not dict and has_keys: raise ValueError('Keyed values may only be supplied for ' 'Collection inputs.') # We cannot have a mixture of keyed and unkeyed values elif self.q2_multiple is dict and has_keys and has_nones: raise ValueError('Keyed values cannot be mixed with unkeyed ' 'values.') return keys, values def _check_length(self, value, ctx): import collections if isinstance(value, tuple) and len(value) == 1 and \ isinstance(value[0], dict): value = list(value[0].values()) counter = collections.Counter(value) dups = ', '.join(map(repr, (v for v, n in counter.items() if n > 1))) args = ', '.join(map(repr, value)) if dups: raise click.BadParameter( 'received <%s> as an argument, which contains duplicates' ' of the following: <%s>' % (args, dups), ctx=ctx, param=self) q2cli-2024.5.0/q2cli/click/parser.py000066400000000000000000000150701462552630000167310ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- # ---------------------------------------------------------------------------- # Some of the source code in this file is derived from original work: # # Copyright (c) 2014 by the Pallets team. # # To see the license for the original work, see licenses/click.LICENSE.rst # Specific reproduction and derivation of original work is marked below. # ---------------------------------------------------------------------------- import click.parser as parser import click.exceptions as exceptions class Q2Option(parser.Option): @property def takes_value(self): # store_maybe should take a value so that we hit the right branch # in OptionParser._match_long_opt return (super().takes_value or self.action == 'store_maybe' or self.action == 'append_greedy') def _maybe_take(self, state): if not state.rargs: return None # In a more perfect world, we would have access to all long opts # and could verify against those instead of just the prefix '--' if state.rargs[0].startswith('--'): return None return state.rargs.pop(0) # Specific technique derived from original: # < https://github.com/pallets/click/blob/ # c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L867 > # Copyright (c) 2014 by the Pallets team. def process(self, value, state): # actions should update state.opts and state.order if (self.dest in state.opts and self.action not in ('append', 'append_const', 'append_maybe', 'append_greedy', 'count')): raise exceptions.UsageError( 'Option %r was specified multiple times in the command.' 
% self._get_opt_name()) elif self.action == 'store_maybe': assert value == () value = self._maybe_take(state) if value is None: state.opts[self.dest] = self.const else: state.opts[self.dest] = value state.order.append(self.obj) # can't forget this elif self.action == 'append_maybe': value = self._maybe_take(state) if value is None: state.opts.setdefault(self.dest, []).append(self.const) else: while value is not None: state.opts.setdefault(self.dest, []).append(value) value = self._maybe_take(state) state.order.append(self.obj) # can't forget this elif self.action == 'append_greedy': assert value == () value = self._maybe_take(state) while value is not None: state.opts.setdefault(self.dest, []).append(value) value = self._maybe_take(state) state.order.append(self.obj) # can't forget this elif self.takes_value and value.startswith('--'): # Error early instead of cascading the parse error to a "missing" # parameter, which they ironically did provide raise parser.BadOptionUsage( self, '%s option requires an argument' % self._get_opt_name()) else: super().process(value, state) def _get_opt_name(self): if hasattr(self.obj, 'secondary_opts'): return ' / '.join(self.obj.opts + self.obj.secondary_opts) if hasattr(self.obj, 'get_error_hint'): return self.obj.get_error_hint(None) return ' / '.join(self._long_opts) class Q2Parser(parser.OptionParser): # Modified from original: # < https://github.com/pallets/click/blob/ # ic6042bf2607c5be22b1efef2e42a94ffd281434c/click/parser.py#L228 > # Copyright (c) 2014 by the Pallets team. def add_option(self, obj, opts, dest, action=None, nargs=1, const=None): """Adds a new option named `dest` to the parser. The destination is not inferred (unlike with optparse) and needs to be explicitly provided. Action can be any of ``store``, ``store_const``, ``append``, ``appnd_const`` or ``count``. The `obj` can be used to identify the option in the order list that is returned from the parser. 
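        For illustration only (the option name is hypothetical, not one that
        q2cli registers), a flag that may optionally consume a value could be
        added roughly like this::

            parser.add_option(obj=opt, opts=['--p-example-flag'],
                              dest='example_flag', action='store_maybe',
                              const=True)

        With ``store_maybe`` (or ``append_maybe``) a ``const`` is required and
        ``nargs`` is forced to 0 below, so click reads no arguments itself and
        ``Q2Option._maybe_take`` decides whether the following token is a
        value for this option.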
""" opts = [parser.normalize_opt(opt, self.ctx) for opt in opts] # BEGIN MODIFICATIONS if action == 'store_maybe' or action == 'append_maybe': # Specifically target this branch: # < https://github.com/pallets/click/blob/ # c6042bf2607c5be22b1efef2e42a94ffd281434c/click/parser.py#L341 > # this happens to prevents click from reading any arguments itself # because it will only "pop" off rargs[:0], which is nothing nargs = 0 if const is None: raise ValueError("A 'const' must be provided when action is " "'store_maybe' or 'append_maybe'") elif action == 'append_greedy': nargs = 0 option = Q2Option(obj=obj, opts=opts, dest=dest, action=action, nargs=nargs, const=const) # END MODIFICATIONS self._opt_prefixes.update(option.prefixes) for opt in option._short_opts: self._short_opt[opt] = option for opt in option._long_opts: self._long_opt[opt] = option def parse_args(self, args): backup = args.copy() # args will be mutated by super() try: return super().parse_args(args) except exceptions.UsageError: if '--help' in backup: # all is forgiven return {'help': True}, [], ['help'] raise # Override of private member: # < https://github.com/pallets/click/blob/ # ic6042bf2607c5be22b1efef2e42a94ffd281434c/click/parser.py#L321 > def _match_long_opt(self, opt, explicit_value, state): if opt not in self._long_opt: from q2cli.util import get_close_matches # This is way better than substring matching possibilities = get_close_matches(opt, self._long_opt) raise exceptions.NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) return super()._match_long_opt(opt, explicit_value, state) q2cli-2024.5.0/q2cli/click/type.py000066400000000000000000000146521462552630000164230ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import click from qiime2.core.type.util import is_collection_type def is_writable_dir(path): import os head = 'do-while' path = os.path.normpath(os.path.abspath(path)) while head: if os.path.exists(path): if os.path.isfile(path): return False else: return os.access(path, os.W_OK | os.X_OK) path, head = os.path.split(path) return False class OutDirType(click.Path): def convert(self, value, param, ctx): import os # Click path fails to validate writability on new paths if os.path.exists(value): if os.path.isfile(value): self.fail('%r is already a file.' % (value,), param, ctx) else: self.fail('%r already exists, will not overwrite.' % (value,), param, ctx) if value[-1] != os.path.sep: value += os.path.sep if not is_writable_dir(value): self.fail('%r is not a writable directory, cannot write output' ' to it.' 
% (value,), param, ctx) return value class QIIME2Type(click.ParamType): def __init__(self, type_ast, type_repr, is_output=False): self.type_repr = type_repr self.type_ast = type_ast self.is_output = is_output self._type_expr = None @property def type_expr(self): import qiime2.sdk.util if self._type_expr is None: self._type_expr = qiime2.sdk.util.type_from_ast(self.type_ast) return self._type_expr def convert(self, value, param, ctx): import qiime2.sdk.util if value is None: return None # Them's the rules if self.is_output: return self._convert_output(value, param, ctx) if qiime2.sdk.util.is_semantic_type(self.type_expr): return self._convert_input(value, param, ctx) if qiime2.sdk.util.is_metadata_type(self.type_expr): return self._convert_metadata(value, param, ctx) return self._convert_primitive(value, param, ctx) def _convert_output(self, value, param, ctx): import os from q2cli.util import output_in_cache # Click path fails to validate writability on new paths # Check if our output path is actually in a cache and if it is skip our # other checks if output_in_cache(value): return value if os.path.exists(value): if os.path.isdir(value): self.fail('%r is already a directory.' % (value,), param, ctx) directory = os.path.dirname(value) if (directory and not os.path.exists(directory) and not is_collection_type(param.type.type_expr)): self.fail('Directory %r does not exist, cannot save %r into it.' % (directory, os.path.basename(value)), param, ctx) if not is_writable_dir(directory): self.fail('%r is not a writable directory, cannot write output' ' to it.' % (directory,), param, ctx) return value def _convert_input(self, value, param, ctx): import os import qiime2.sdk import qiime2.sdk.util import q2cli.util try: result, error = q2cli.util._load_input(value) if result is not None: result_value = result[1] except Exception as e: header = f'There was a problem loading {value!r} as an artifact:' q2cli.util.exit_with_error( e, header=header, traceback='stderr') if error: self.fail(str(error), param, ctx) # We want to use click's fail to pretty print whatever error we got # from get_input if isinstance(result_value, qiime2.sdk.Visualization): maybe = value[:-1] + 'a' hint = '' if os.path.exists(maybe): hint = (' (There is an artifact with the same name:' ' %r, did you mean that?)' % os.path.basename(maybe)) self.fail('%r is a QIIME 2 visualization (.qzv), not an' ' Artifact (.qza)%s' % (value, hint), param, ctx) style = qiime2.sdk.util.interrogate_collection_type(self.type_expr) if style.style is None and result_value not in self.type_expr: # collections need to be handled above this self.fail("Expected an artifact of at least type %r." " An artifact of type %r was provided." % (self.type_expr, result_value.type), param, ctx) return result def _convert_metadata(self, value, param, ctx): import q2cli.util if self.type_expr.name == 'MetadataColumn': value, column = value metadata = q2cli.util.load_metadata(value) if self.type_expr.name != 'MetadataColumn': return metadata else: try: metadata_column = metadata.get_column(column) except Exception: self.fail("There was an issue with retrieving column %r from " "the metadata." % column) if metadata_column not in self.type_expr: self.fail("Metadata column is of type %r, but expected %r." 
% (metadata_column.type, self.type_expr.fields[0])) return metadata_column def _convert_primitive(self, value, param, ctx): import qiime2.sdk.util try: return qiime2.sdk.util.parse_primitive(self.type_expr, value) except ValueError: expr = qiime2.sdk.util.type_from_ast(self.type_ast) raise click.BadParameter( 'received <%s> as an argument, which is incompatible' ' with parameter type: %r' % (value, expr), ctx=ctx) @property def name(self): return self.get_metavar('') def get_type_repr(self, param): return self.type_repr def get_missing_message(self, param): if self.is_output: return '("--output-dir" may also be used)' q2cli-2024.5.0/q2cli/commands.py000066400000000000000000000613101462552630000161470ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import click import q2cli.builtin.dev import q2cli.builtin.info import q2cli.builtin.tools from q2cli.click.command import BaseCommandMixin from q2cli.core.config import CONFIG class RootCommand(BaseCommandMixin, click.MultiCommand): """This class defers to either the PluginCommand or the builtin cmds""" _builtin_commands = { 'info': q2cli.builtin.info.info, 'tools': q2cli.builtin.tools.tools, 'dev': q2cli.builtin.dev.dev } def __init__(self, *args, **kwargs): import re import sys unicodes = ["\u2018", "\u2019", "\u201C", "\u201D", "\u2014", "\u2013"] category_regex = re.compile(r'--m-(\S+)-category') invalid_chars = [] categories = [] for command in sys.argv: if any(x in command for x in unicodes): invalid_chars.append(command) match = category_regex.fullmatch(command) if match is not None: param_name, = match.groups() # Maps old-style option name to new name. categories.append((command, '--m-%s-column' % param_name)) if invalid_chars or categories: if invalid_chars: msg = ("Error: Detected invalid character in: %s\nVerify the " "correct quotes or dashes (ASCII) are being used." % ', '.join(invalid_chars)) click.echo(CONFIG.cfg_style('error', msg), err=True) if categories: old_to_new_names = '\n'.join( 'Instead of %s, trying using %s' % (old, new) for old, new in categories) msg = ("Error: The following options no longer exist because " "metadata *categories* are now called metadata " "*columns* in QIIME 2.\n\n%s" % old_to_new_names) click.echo(CONFIG.cfg_style('error', msg), err=True) sys.exit(-1) super().__init__(*args, **kwargs) # Plugin state for current deployment that will be loaded from cache. # Used to construct the dynamic CLI. self._plugins = None @property def _plugin_lookup(self): import q2cli.util # See note in `q2cli.completion.write_bash_completion_script` for why # `self._plugins` will not always be obtained from # `q2cli.cache.CACHE.plugins`. if self._plugins is None: import q2cli.core.cache self._plugins = q2cli.core.cache.CACHE.plugins name_map = {} for name, plugin in self._plugins.items(): if plugin['actions']: name_map[q2cli.util.to_cli_name(name)] = plugin return name_map def list_commands(self, ctx): import itertools # Avoid sorting builtin commands as they have a predefined order based # on applicability to users. For example, it isn't desirable to have # the `dev` command listed before `info` and `tools`. 
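        # The listing therefore starts with info, tools, dev (in that order),
        # followed by installed plugin commands sorted alphabetically (e.g. a
        # hypothetical deployment might then list demux, diversity, ...).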
builtins = self._builtin_commands plugins = sorted(self._plugin_lookup) return itertools.chain(builtins, plugins) def get_command(self, ctx, name): if name in self._builtin_commands: return self._builtin_commands[name] try: plugin = self._plugin_lookup[name] except KeyError: from q2cli.util import get_close_matches possibilities = get_close_matches(name, self._plugin_lookup) if len(possibilities) == 1: hint = ' Did you mean %r?' % possibilities[0] elif possibilities: hint = ' (Possible commands: %s)' % ', '.join(possibilities) else: hint = '' click.echo( CONFIG.cfg_style('error', "Error: QIIME 2 has no " "plugin/command named %r." % name + hint), err=True) ctx.exit(2) # Match exit code of `return None` return PluginCommand(plugin, name) class PluginCommand(BaseCommandMixin, click.MultiCommand): """Provides ActionCommands based on available Actions""" def __init__(self, plugin, name, *args, **kwargs): import q2cli.util # the cli currently doesn't differentiate between methods # and visualizers, it treats them generically as Actions self._plugin = plugin self._action_lookup = {} self._hidden_actions = {} # Hide actions that start with _ by default for id, a in plugin['actions'].items(): if id.startswith('_'): self._hidden_actions[q2cli.util.hidden_to_cli_name(id)] = a else: self._action_lookup[q2cli.util.to_cli_name(id)] = a support = 'Getting user support: %s' % plugin['user_support_text'] website = 'Plugin website: %s' % plugin['website'] description = 'Description: %s' % plugin['description'] help_ = '\n\n'.join([description, website, support]) params = [ click.Option(('--version',), is_flag=True, expose_value=False, is_eager=True, callback=self._get_version, help='Show the version and exit.'), q2cli.util.example_data_option(self._get_plugin), q2cli.util.citations_option(self._get_citation_records) ] if self._hidden_actions != {}: params.append( click.Option( ('--show-hidden-actions',), is_flag=True, expose_value=False, is_eager=True, callback=self._get_hidden_actions, help="This plugin has hidden actions with names starting " "with '_'. These are generally called internally by " "pipelines. Passing this flag will display those " "actions.")) super().__init__(name, *args, short_help=plugin['short_description'], help=help_, params=params, **kwargs) def _get_version(self, ctx, param, value): if not value or ctx.resilient_parsing: return import q2cli.util pm = q2cli.util.get_plugin_manager() for plugin in pm.plugins.values(): if (self._plugin['name'] == plugin.name): pkg_name = plugin.project_name pkg_version = plugin.version break else: pkg_name = pkg_version = "[UNKNOWN]" click.echo( "QIIME 2 Plugin '%s' version %s (from package '%s' version %s)" % (self._plugin['name'], self._plugin['version'], pkg_name, pkg_version) ) ctx.exit() def _get_citation_records(self): import q2cli.util pm = q2cli.util.get_plugin_manager() return pm.plugins[self._plugin['name']].citations def _get_hidden_actions(self, ctx, param, value): """Add actions that start with _ back to the lookup""" # Click calls this whether the flag was provided or not. If the flag # was not provided value is, unsurprisingly, False. We do not want to # execute this if the flag was not provided. # # Resilient parsing has something to do with ignoring default values # which for this is False. 
Honestly not 100% sure why we need that # here, but it is in the check in _get_version, and I a mimicking that if not value or ctx.resilient_parsing: return from click.utils import echo self._action_lookup.update(self._hidden_actions) # Handle the printing and exiting here. This feels like a pretty # serious misuse of click, but it probably isn't the most egregious in # the cli echo(ctx.get_help(), color=ctx.color) ctx.exit() def _get_plugin(self): import q2cli.util pm = q2cli.util.get_plugin_manager() return pm.plugins[self._plugin['name']] def list_commands(self, ctx): return sorted(self._action_lookup) def get_command(self, ctx, name): try: # Hidden actions are still valid commands self._action_lookup.update(self._hidden_actions) action = self._action_lookup[name] except KeyError: from q2cli.util import get_close_matches possibilities = get_close_matches(name, self._action_lookup) if len(possibilities) == 1: hint = ' Did you mean %r?' % possibilities[0] elif possibilities: hint = ' (Possible commands: %s)' % ', '.join(possibilities) else: hint = '' click.echo( CONFIG.cfg_style('error', "Error: QIIME 2 plugin %r has no " "action %r." % (self._plugin['name'], name) + hint), err=True) ctx.exit(2) # Match exit code of `return None` return ActionCommand(name, self._plugin, action) class ActionCommand(BaseCommandMixin, click.Command): """A click manifestation of a QIIME 2 API Action (Method/Visualizer) """ def __init__(self, name, plugin, action): import q2cli.util import q2cli.click.type self.plugin = plugin self.action = action self._inputs, self._params, self._outputs = \ self._build_generated_options() self._misc = [ click.Option(['--output-dir'], type=q2cli.click.type.OutDirType(), help='Output unspecified results to a directory'), click.Option(['--verbose / --quiet'], default=None, required=False, help='Display verbose output to stdout and/or stderr ' 'during execution of this action. Or silence ' 'output if execution is successful (silence is ' 'golden).') ] # If this action is a pipeline it needs additional options for # recycling and parallelization action_obj = self._get_action() if action_obj.type == 'pipeline': self._misc.extend([ click.Option(['--recycle-pool'], required=False, type=str, help='Use a cache pool for pipeline resumption. ' 'QIIME 2 will cache your results in this ' 'pool for reuse by future invocations. ' 'These pool are retained until deleted by ' 'the user. If not provided, QIIME 2 will ' 'create a pool which is automatically ' 'reused by invocations of the same action ' 'and removed if the action is successful. ' 'Note: these pools are local to the ' 'cache you are using.'), click.Option(['--no-recycle'], is_flag=True, required=False, help='Do not recycle results from a previous ' 'failed pipeline run or save the results ' 'from this run for future recycling.'), click.Option(['--parallel'], is_flag=True, required=False, help='Execute your action in parallel. This flag ' 'will use your default parallel config.'), click.Option(['--parallel-config'], required=False, type=click.Path(exists=True, dir_okay=False), help='Execute your action in parallel using a ' 'config at the indicated path.'), click.Option(['--use-cache'], required=False, type=click.Path(exists=True, file_okay=False), help='Specify the cache to be used for the ' 'intermediate work of this pipeline. If ' 'not provided, the default cache under ' '$TMP/qiime2/ will be used. 
' 'IMPORTANT FOR HPC USERS: If you are on an ' 'HPC system and are using parallel ' 'execution it is important to set this to ' 'a location that is globally accessible to ' 'all nodes in the cluster.')]) self._misc.extend([ q2cli.util.example_data_option( self._get_plugin, self.action['id']), q2cli.util.citations_option(self._get_citation_records)]) options = [*self._inputs, *self._params, *self._outputs, *self._misc] help_ = [action['description']] if self.action['deprecated']: help_.append(CONFIG.cfg_style( 'warning', 'WARNING:\n\nThis command is deprecated and will ' 'be removed in a future version of this plugin.')) super().__init__(name, params=options, callback=self, short_help=action['name'], help='\n\n'.join(help_)) def _build_generated_options(self): import q2cli.click.option inputs = [] params = [] outputs = [] for item in self.action['signature']: item = item.copy() type = item.pop('type') if type == 'input': storage = inputs elif type == 'parameter': storage = params else: storage = outputs opt = q2cli.click.option.GeneratedOption(prefix=type[0], **item) storage.append(opt) return inputs, params, outputs def get_opt_groups(self, ctx): return { 'Inputs': self._inputs, 'Parameters': self._params, 'Outputs': self._outputs, 'Miscellaneous': self._misc + [self.get_help_option(ctx)] } def _get_citation_records(self): return self._get_action().citations def _get_plugin(self): import q2cli.util pm = q2cli.util.get_plugin_manager() return pm.plugins[self.plugin['name']] def _get_action(self): plugin = self._get_plugin() return plugin.actions[self.action['id']] def __call__(self, **kwargs): """Called when user hits return, **kwargs are Dict[click_names, Obj]""" import os import click import qiime2.util from q2cli.util import (output_in_cache, _get_cache_path_and_key, get_default_recycle_pool) from qiime2.core.cache import Cache from qiime2.sdk import ResultCollection output_dir = kwargs.pop('output_dir') # If they gave us a cache and key combo as an output dir, we want to # error out, so we check if their output dir contains a : and the part # before it is a cache if output_dir: potential_cache = output_dir.rsplit(':', 1)[0] if potential_cache and os.path.exists(potential_cache) and \ Cache.is_cache(potential_cache): raise ValueError(f"The given output dir '{output_dir}' " "appears to be a cache:key combo. Cache keys " "cannot be used as output dirs.") # Args pertaining to pipeline resumption recycle_pool = kwargs.pop('recycle_pool', None) no_recycle = kwargs.pop('no_recycle', False) if recycle_pool is not None and no_recycle: raise ValueError('Cannot set a pool to be used for recycling and ' 'no recycle simultaneously.') used_cache = kwargs.pop('use_cache', None) if used_cache is not None and not Cache.is_cache(used_cache): raise ValueError(f"The path '{used_cache}' is not a valid cache, " "please supply a path to a valid pre-existing " "cache.") parallel = kwargs.pop('parallel', False) parallel_config_fp = kwargs.pop('parallel_config', None) if parallel_config_fp is not None: parallel = True verbose = kwargs.pop('verbose') if verbose is None: verbose = False quiet = False elif verbose: quiet = False else: quiet = True cache = Cache(path=used_cache) arguments = {} init_outputs = {} for key, value in kwargs.items(): prefix, *parts = key.split('_') key = '_'.join(parts) if prefix == 'o': if value is None: value = os.path.join(output_dir, key) init_outputs[key] = value elif prefix == 'm': arguments[key[:-len('_file')]] = value # Make sure our inputs are backed by the cache we are using. 
This # is necessary for HPCs where our input .qzas may be in a location # that is not globally accessible to the cluster. The user should # be using a cache that is in a globally accessible location. We # need to ensure we put our artifacts in that cache. elif prefix == 'i' and used_cache is not None: value_ = value if isinstance(value, list): value_ = [cache.process_pool.save(v) for v in value] elif isinstance(value, dict) or \ isinstance(value, ResultCollection): value_ = { k: cache.process_pool.save(v) for k, v in value.items()} elif isinstance(value, set): value_ = set([cache.process_pool.save(v) for v in value]) elif value is not None: value_ = cache.process_pool.save(value) arguments[key] = value_ else: arguments[key] = value outputs = self._order_outputs(init_outputs) action = self._get_action() # If --no-recycle is not set, pipelines attempt to recycle their # outputs from a pool by default allowing recovery of failed pipelines # from point of failure without needing to restart the pipeline from # the beginning default_pool = get_default_recycle_pool( f'{action.plugin_id}_{action.id}') if not no_recycle and action.type == 'pipeline' and \ recycle_pool is None: # We implicitly use a pool named # recycle___ if no pool is # provided recycle_pool = default_pool if recycle_pool is not None and recycle_pool != default_pool and \ recycle_pool not in cache.get_pools(): msg = ("The pool '%s' does not exist on the cache at '%s'. It " "will be created." % (recycle_pool, cache.path)) click.echo(CONFIG.cfg_style('warning', msg)) # `qiime2.util.redirected_stdio` defaults to stdout/stderr when # supplied `None`. log = None if not verbose: import tempfile log = tempfile.NamedTemporaryFile(prefix='qiime2-q2cli-err-', suffix='.log', delete=False, mode='w') if action.deprecated: # We don't need to worry about redirecting this, since it should a) # always be shown to the user and b) the framework-originated # FutureWarning will wind up in the log file in quiet mode. msg = ('Plugin warning from %s:\n\n%s is deprecated and ' 'will be removed in a future version of this plugin.' % (q2cli.util.to_cli_name(self.plugin['name']), self.name)) click.echo(CONFIG.cfg_style('warning', msg)) cleanup_logfile = False try: with qiime2.util.redirected_stdio(stdout=log, stderr=log): if parallel: from qiime2.sdk.parallel_config import \ (get_config_from_file, ParallelConfig) action = action.parallel if parallel_config_fp is None: parallel_config = ParallelConfig() else: config, mapping = \ get_config_from_file(parallel_config_fp) parallel_config = ParallelConfig(config, mapping) with parallel_config: results = self._execute_action( action, arguments, cache, recycle_pool) else: results = self._execute_action( action, arguments, cache, recycle_pool) except Exception as e: header = ('Plugin error from %s:' % q2cli.util.to_cli_name(self.plugin['name'])) if verbose: # log is not a file log = 'stderr' q2cli.util.exit_with_error(e, header=header, traceback=log) else: cleanup_logfile = True finally: # OS X will reap temporary files that haven't been touched in # 36 hours, double check that the log is still on the filesystem # before trying to delete. Otherwise this will fail and the # output won't be written. 
if log and cleanup_logfile and os.path.exists(log.name): log.close() os.remove(log.name) if output_dir is not None: os.makedirs(output_dir) for result, output in zip(results, outputs): if isinstance(output, tuple) and len(output) == 1: output = output[0] if output_in_cache(output) and output_dir is None: cache_path, key = _get_cache_path_and_key(output) output_cache = Cache(cache_path) if isinstance(result, ResultCollection): output_cache.save_collection(result, key) path = output else: output_cache.save(result, key) path = output else: path = result.save(output) if not quiet: if output_in_cache(output): message = \ f"Added {result.type} to cache: {cache_path} as: {key}" else: type = f'Collection[{list(result.values())[0].type}]' if \ isinstance(result, ResultCollection) else result.type message = f"Saved {type} to: {path}" click.echo(CONFIG.cfg_style('success', message)) # If we used a default recycle pool for a pipeline and the pipeline # succeeded, then we need to clean up the pool. Make sure to do this at # the very end so if a failure happens during writing results we still # have them if recycle_pool == default_pool: cache.remove(recycle_pool) def _execute_action(self, action, arguments, cache, recycle_pool=None): with cache: if recycle_pool is None: results = action(**arguments) results = results._result() else: pool = cache.create_pool(key=recycle_pool, reuse=True) with pool: results = action(**arguments) # If we executed in a pool using parsl we need to get # our results inside of the context manager to ensure # that the pool is set for the entirety of the # execution results = results._result() return results def _order_outputs(self, outputs): ordered = [] for item in self.action['signature']: if item['type'] == 'output': ordered.append(outputs[item['name']]) return ordered def format_epilog(self, ctx, formatter): if self.action['epilog']: with formatter.section(click.style('Examples', bold=True)): for line in self.action['epilog']: formatter.write(' ' * formatter.current_indent) formatter.write(line) formatter.write('\n') q2cli-2024.5.0/q2cli/core/000077500000000000000000000000001462552630000147235ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/core/__init__.py000066400000000000000000000005351462552630000170370ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- q2cli-2024.5.0/q2cli/core/assets/000077500000000000000000000000001462552630000162255ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/core/assets/cli_howto.txt000066400000000000000000000034271462552630000207630ustar00rootroot00000000000000 # Instructions for use: # 1. Open this script in a text editor or IDE. Support for BASH # syntax highlighting can be helpful. # 2. Search or scan visually for '<' or '>' characters to find places where # user input (e.g. a filepath or column name) is required. These must be # replaced with your own values. E.g. -> 'patient_id'. # Failure to remove '<' or '>' may result in `No such File ...` errors # 3. Search for 'FIXME' comments in the script, and respond as directed. # 4. Remove all 'FIXME' comments from the script completely. Failure to do so # may result in 'Missing Option' errors # 5. Adjust the arguments to the commands below to suit your data and metadata. 
# If your data is not identical to that in the replayed analysis, # changes may be required. (e.g. sample ids or rarefaction depth) # 6. Optional: replace any filenames in this script that begin with 'XX' with # unique file names to ensure they are preserved. QIIME 2 saves all outputs # from all actions in this script to disk regardless of whether those # outputs were in the original collection of replayed results. The filenames # of "un-replayed" artifacts are prefixed with 'XX' so they may be easily # located. These names are not guaranteed to be unique, so 'XX_table.qza' # may be overwritten by another 'XX_table.qza' later in the script. # 7. Activate your replay conda environment, and confirm you have installed all # plugins used by the script. # 8. Run this script with `bash `, or copy-paste commands # into the terminal for a more interactive analysis. # 9. Optional: to delete all results not required to produce the figures and # data used to generate this script, navigate to the directory in which you # ran the script and `rm XX*.qz*`q2cli-2024.5.0/q2cli/core/assets/copyright_note.txt000066400000000000000000000004661462552630000220310ustar00rootroot00000000000000# This document is a representation of the scholarly work of the creator of the # QIIME 2 Results provided as input to this software, and may be protected by # intellectual property law. Please respect all copyright restrictions and # licenses governing the use, modification, and redistribution of this work. q2cli-2024.5.0/q2cli/core/cache.py000066400000000000000000000257021462552630000163460ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- class DeploymentCache: """Cached CLI state for a QIIME deployment. In this context, a QIIME deployment is the set of installed Python packages, including their exact versions, that register one or more QIIME 2 plugins. The exact version of q2cli is also included in the deployment. The deployment cache stores the current deployment's package names and versions in a requirements.txt file under the cache directory. This file is used to determine if the cache is outdated. If the cache is determined to be outdated, it will be refreshed based on the current deployment state. Thus, adding, removing, upgrading, or downgrading a plugin package or q2cli itself will trigger a cache refresh. Two mechanisms are provided to force a cache refresh. Setting the environment variable Q2CLIDEV to any value will cause the cache to be refreshed upon instantiation. Calling `.refresh()` will also refresh the cache. Forced refreshing of the cache is useful for plugin and/or q2cli developers who want their changes to take effect in the CLI without changing their package versions. Cached CLI state is stored in a state.json file under the cache directory. It is not a public file format and it is not versioned. q2cli is included as part of the QIIME deployment so that the cached state can always be read (or recreated as necessary) by the currently installed version of q2cli. This class is intended to be a singleton because it is responsible for managing the on-disk cache. Having more than one instance managing the cache has the possibility of two instances clobbering the cache (e.g. 
in a multithreaded/multiprocessing situation). Also, having a single instance improves performance by only reading and/or refreshing the cache a single time during its lifetime. Having two instances could, for example, trigger two cache refreshes if Q2CLIDEV is set. To support these use-cases, a module-level `CACHE` variable stores a single instance of this class. """ # Public API def __init__(self): import os # Indicates if the cache has been refreshed. For performance purposes, # the cache is only refreshed a single time (at maximum) during the # object's lifetime. Thus, "hot reloading" isn't supported, but this # shouldn't be necessary for the CLI. self._refreshed = False self._cache_dir = self._get_cache_dir() refresh = 'Q2CLIDEV' in os.environ self._state = self._get_cached_state(refresh=refresh) @property def plugins(self): """Decoded JSON object representing CLI state on a per-plugin basis.""" return self._state['plugins'] def refresh(self): """Trigger a forced refresh of the cache. If the cache has already been refreshed (either by this method or at some point during instantiation), this method is a no-op. """ if not self._refreshed: self._state = self._get_cached_state(refresh=True) # Private API def _get_cache_dir(self): import os import q2cli.util cache_dir = q2cli.util.get_cache_dir() os.makedirs(cache_dir, exist_ok=True) return cache_dir def _get_cached_state(self, refresh): import json import os.path import q2cli.util current_requirements = self._get_current_requirements() state_path = os.path.join(self._cache_dir, 'state.json') # See note on `get_completion_path` for why knowledge of this path # exists in `q2cli.util` and not in this class. completion_path = q2cli.util.get_completion_path() # The cache must be refreshed in the following cases: # 1) We have been explicitly told to refresh. if refresh: self._cache_current_state(current_requirements) # 2) The current deployment requirements are different than the cached # requirements. elif current_requirements != self._get_cached_requirements(): self._cache_current_state(current_requirements) # 3) The cached state file does not exist. elif not os.path.exists(state_path): self._cache_current_state(current_requirements) # 4) The cached bash completion script does not exist. elif not os.path.exists(completion_path): self._cache_current_state(current_requirements) def decoder(obj): if obj.get('__q2type__', None) == 'set': return set(obj['value']) return obj # Now that the cache is up-to-date, read it. try: with open(state_path, 'r') as fh: return json.load(fh, object_hook=decoder) except json.JSONDecodeError: # 5) The cached state file can't be read as JSON. self._cache_current_state(current_requirements) with open(state_path, 'r') as fh: return json.load(fh, object_hook=decoder) # NOTE: The private methods below are all used internally within # `_get_cached_state`. def _get_current_requirements(self): """Includes installed versions of q2cli and QIIME 2 plugins.""" import os import pkg_resources import q2cli reqs = { pkg_resources.Requirement.parse('q2cli == %s' % q2cli.__version__) } # A distribution (i.e. Python package) can have multiple plugins, where # each plugin is its own entry point. A distribution's `Requirement` is # hashable, and the `set` is used to exclude duplicates. Thus, we only # gather the set of requirements for all installed Python packages # containing one or more plugins. It is not necessary to track # individual plugin names and versions in order to determine if the # cache is outdated. 
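        # As an illustration (package names other than q2cli are hypothetical),
        # the resulting set might look like:
        #   {Requirement.parse('q2cli == 2024.5.0'),
        #    Requirement.parse('q2-types == 2024.5.0')}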
# # TODO: this code is (more or less) copied from # `qiime2.sdk.PluginManager.iter_entry_points`. Importing QIIME is # currently slow, and it adds ~600-700ms to any CLI command. This makes # the CLI pretty unresponsive, especially when running help/informative # commands. Replace with the following lines when # https://github.com/qiime2/qiime2/issues/151 is fixed: # # for ep in qiime2.sdk.PluginManager.iter_entry_points(): # reqs.add(ep.dist.as_requirement()) # for entry_point in pkg_resources.iter_entry_points( group='qiime2.plugins'): if 'QIIMETEST' in os.environ: if entry_point.name in ('dummy-plugin', 'other-plugin'): reqs.add(entry_point.dist.as_requirement()) else: if entry_point.name not in ('dummy-plugin', 'other-plugin'): reqs.add(entry_point.dist.as_requirement()) return reqs def _get_cached_requirements(self): import os.path import pkg_resources path = os.path.join(self._cache_dir, 'requirements.txt') if not os.path.exists(path): # No cached requirements. The empty set will always trigger a cache # refresh because the current requirements will, at minimum, # contain q2cli. return set() else: with open(path, 'r') as fh: contents = fh.read() try: return set(pkg_resources.parse_requirements(contents)) except pkg_resources.RequirementParseError: # Unreadable cached requirements, trigger a cache refresh. return set() def _cache_current_state(self, requirements): import json import os.path import click import q2cli.core.completion import q2cli.util click.secho( "QIIME is caching your current deployment for improved " "performance. This may take a few moments and should only happen " "once per deployment.", fg='yellow', err=True) cache_dir = self._cache_dir state = self._get_current_state() path = os.path.join(cache_dir, 'state.json') class Q2JSONEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, set): return { '__q2type__': 'set', 'value': list(obj), } return super().default(obj) with open(path, 'w') as fh: json.dump(state, fh, cls=Q2JSONEncoder) q2cli.core.completion.write_bash_completion_script( state['plugins'], q2cli.util.get_completion_path()) # Write requirements file last because the above steps may raise errors # (e.g. a plugin can't be loaded in `_get_current_state`). If any part # of the cache writing fails, it needs to be refreshed the next time # the cache is accessed. The absence of a requirements file will # trigger this cache refresh, avoiding this bug: # https://github.com/qiime2/q2cli/issues/88 path = os.path.join(cache_dir, 'requirements.txt') with open(path, 'w') as fh: for req in requirements: # `str(Requirement)` is the recommended way to format a # `Requirement` that can be read with `Requirement.parse`. fh.write(str(req)) fh.write('\n') self._refreshed = True def _get_current_state(self): """Get current CLI state as an object that is serializable as JSON. WARNING: This method is very slow and should only be called when the cache needs to be refreshed. 
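        The returned object has roughly the following shape (the plugin and
        action names shown are placeholders, not real entries):

            {'plugins': {'example-plugin': {'id': 'example_plugin',
                                            'name': 'example-plugin',
                                            ...,
                                            'actions': {...}}}}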
""" import q2cli.util state = { 'plugins': {} } plugin_manager = q2cli.util.get_plugin_manager() for name, plugin in plugin_manager.plugins.items(): state['plugins'][name] = self._get_plugin_state(plugin) return state def _get_plugin_state(self, plugin): import q2cli.core.state state = q2cli.core.state.get_plugin_state(plugin) for id, action in plugin.actions.items(): state['actions'][id]['epilog'] = self._get_action_epilog(action) return state def _get_action_epilog(self, action): import q2cli.core.usage lines = [] for name, example in action.examples.items(): use = q2cli.core.usage.CLIUsage() use.comment('### example: %s\n' % (name.replace('_', ' '),)) example(use) use.recorder.append('') lines += use.recorder return lines # Singleton. Import and use this instance as necessary. CACHE = DeploymentCache() q2cli-2024.5.0/q2cli/core/completion.py000066400000000000000000000123141462552630000174470ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- # NOTE: This module is closely coupled with `q2cli.cache`. It is a separate # module to avoid cluttering `q2cli.cache` with completion script code, and # provides a place to support other shell completion scripts in the future # (e.g. zsh). def write_bash_completion_script(plugins, path): """ Parameters ---------- plugins : dict Decoded JSON object representing CLI state on a per-plugin basis (e.g. as returned by `DeploymentCache.plugins`). See note within this function for why this parameter is necessary. path : str Path to write completion script to. """ import os import os.path import stat import textwrap from q2cli.__main__ import qiime as root # `write_bash_completion_script` is called by `q2cli.cache.DeploymentCache` # when it is refreshing its cache. `q2cli.commands.RootCommand` could have # already asked for the cache, for example, if the user ran a command and # the cache must be refreshed. The bash completion script is generated by # traversing the `RootCommand` tree, so there is a cycle when `RootCommand` # attempts to access the cache in order to build itself. We work around # this by bootstrapping the `RootCommand`'s `._plugins` attribute with the # plugin state that has already been loaded by `DeploymentCache`. root._plugins = plugins cmd_reply = _generate_command_reply(root) cmd_reply = textwrap.indent(cmd_reply, ' ') completion_script = COMPLETION_SCRIPT_TEMPLATE.format(cmd_reply=cmd_reply) with open(path, 'w') as fh: fh.write(completion_script) # Make bash completion script executable: # http://stackoverflow.com/a/12792002/3776794 st = os.stat(path) # Set executable bit for user,group,other for root/sudo installs os.chmod(path, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH) def _generate_command_reply(cmd): """Recursively generate completion reply for this command and subcommands. Parameters ---------- cmd : click.Command Command to generate completion replies for (including its subcommands). 
""" import textwrap import click ctx = None options = ['--help'] for param in cmd.params: if isinstance(param, click.Option): options.extend(param.opts) options.extend(param.secondary_opts) if hasattr(param, 'q2_extra_opts'): options.extend(param.q2_extra_opts) subcmd_names = [] if isinstance(cmd, click.MultiCommand): subcmd_names.extend(cmd.list_commands(ctx)) subcmd_cases = [] for subcmd_name in subcmd_names: subcmd_reply = _generate_command_reply( cmd.get_command(ctx, subcmd_name)) subcmd_reply = textwrap.indent(subcmd_reply, ' ') case = SUBCOMMAND_CASE_TEMPLATE.format( subcmd_name=subcmd_name, subcmd_reply=subcmd_reply) subcmd_cases.append(case) subcmd_cases = textwrap.indent('\n'.join(subcmd_cases), ' ' * 6) cmd_reply = COMMAND_REPLY_TEMPLATE.format( options=' '.join(options), subcmd_names=' '.join(subcmd_names), subcmd_cases=subcmd_cases) return cmd_reply # NOTE: using double braces to avoid `str.format` interpolation when bash needs # curly braces in the generated code. # # NOTE: the handling of a negative COMP_CWORD is necessary in certain versions # of bash (e.g. at least the bash shipped with OS X 10.9.5). When adding # whitespace to the end of a command, and then moving the cursor backwards in # the command and hitting , COMP_CWORD can be negative (I've only seen -2 # as its value). This is a bash bug and is not documented behavior. Other CLIs # with tab completion suffer from the same issue, and each one deals with this # bug differently (some not at all, e.g. `git`). The workaround used below # seems to provide the least destructive completion behavior for our CLI. # # Bug report reference: # https://lists.gnu.org/archive/html/bug-bash/2009-07/msg00108.html COMPLETION_SCRIPT_TEMPLATE = """\ #!/usr/bin/env bash _qiime_completion() {{ local COMP_WORDS=(${{COMP_WORDS[*]}}) local incomplete if [[ ${{COMP_CWORD}} -lt 0 ]] ; then COMP_CWORD="${{#COMP_WORDS[*]}}" incomplete="" else incomplete="${{COMP_WORDS[COMP_CWORD]}}" fi local curpos nextpos nextword nextpos=0 {cmd_reply} return 0 }} _qiime_completion """ COMMAND_REPLY_TEMPLATE = """\ curpos=${{nextpos}} while : do nextpos=$((curpos + 1)) nextword="${{COMP_WORDS[nextpos]}}" if [[ ${{nextpos}} -eq ${{COMP_CWORD}} ]] ; then if [[ ${{incomplete}} == -* ]] ; then echo "$(compgen -W "{options}" -- $incomplete)" else echo "$(compgen -W "{subcmd_names}" -- $incomplete)" fi return 0 else case "${{nextword}}" in {subcmd_cases} esac curpos=${{nextpos}} fi done\ """ SUBCOMMAND_CASE_TEMPLATE = """\ {subcmd_name}) {subcmd_reply} ;; """ q2cli-2024.5.0/q2cli/core/config.py000066400000000000000000000120301462552630000165360ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- import os import configparser import click import q2cli.util class CLIConfig(): path = os.path.join(q2cli.util.get_app_dir(), 'cli-colors.theme') VALID_SELECTORS = frozenset( ['option', 'type', 'default_arg', 'command', 'emphasis', 'problem', 'warning', 'error', 'required', 'success']) VALID_STYLINGS = frozenset( ['fg', 'bg', 'bold', 'dim', 'underline', 'blink', 'reverse']) VALID_COLORS = frozenset( ['black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'white', 'bright_black', 'bright_red', 'bright_green', 'bright_yellow', 'bright_blue', 'bright_magenta', 'bright_cyan', 'bright_white']) VALID_BOOLEANS = {'true': True, 'false': False, 't': True, 'f': False} def __init__(self): if os.path.exists(self.path): self.styles = self.get_editable_styles() try: self.parse_file(self.path) except Exception as e: # Let's just be safe and make no attempt to use CONFIG to # format this text if the CONFIG is broken click.secho( "We encountered the following error when parsing your " f"theme:\n\n{str(e)}\n\nIf you want to use a custom " "theme, please either import a new theme, or reset your " "current theme. If you encountered this message while " "importing a new theme or resetting your current theme, " "ignore it.", fg='red') self.styles = self.get_default_styles() else: self.styles = self.get_default_styles() def get_default_styles(self): return {'option': {'fg': 'bright_blue'}, 'type': {'fg': 'green'}, 'default_arg': {'fg': 'magenta'}, 'command': {'fg': 'bright_blue'}, 'emphasis': {'underline': True}, 'problem': {'fg': 'yellow'}, 'warning': {'fg': 'yellow', 'bold': True}, 'error': {'fg': 'red', 'bold': True}, 'required': {'underline': True}, 'success': {'fg': 'green'}} # This maintains the default colors while getting rid of all the default # styling modifiers so what the user puts in their file is all they'll see def get_editable_styles(self): return {'option': {}, 'type': {}, 'default_arg': {}, 'command': {}, 'emphasis': {}, 'problem': {}, 'warning': {}, 'error': {}, 'required': {}, 'success': {}} def _build_error(self, current, valid_list, valid_string): valids = ', '.join(valid_list) raise configparser.Error(f'{current!r} is not a {valid_string}. 
The ' f'{valid_string}s are:\n{valids}') def parse_file(self, fp): if os.path.exists(fp): parser = configparser.ConfigParser() parser.read(fp) for selector_user in parser.sections(): selector = selector_user.lower() if selector not in self.VALID_SELECTORS: self._build_error(selector_user, self.VALID_SELECTORS, 'valid selector') for styling_user in parser[selector]: styling = styling_user.lower() if styling not in self.VALID_STYLINGS: self._build_error(styling_user, self.VALID_STYLINGS, 'valid styling') val_user = parser[selector][styling] val = val_user.lower() if styling == 'fg' or styling == 'bg': if val not in self.VALID_COLORS: self._build_error(val_user, self.VALID_COLORS, 'valid color') else: if val not in self.VALID_BOOLEANS: self._build_error(val_user, self.VALID_BOOLEANS, 'valid boolean') val = self.VALID_BOOLEANS[val] self.styles[selector][styling] = val else: raise configparser.Error(f'{fp!r} is not a valid filepath.') def cfg_style(self, selector, text, required=False): kwargs = self.styles[selector] if required: kwargs = {**self.styles[selector], **self.styles['required']} return click.style(text, **kwargs) CONFIG = CLIConfig() q2cli-2024.5.0/q2cli/core/state.py000066400000000000000000000121571462552630000164230ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- def get_plugin_state(plugin): state = { # TODO this conversion also happens in the framework # (qiime2/plugins.py) to generate an importable module name from a # plugin's `.name` attribute. Centralize this knowledge in the # framework, ideally as a machine-friendly plugin ID (similar to # `Action.id`). 
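        # e.g. a plugin named 'feature-table' (hypothetical) gets the
        # importable id 'feature_table'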
'id': plugin.name.replace('-', '_'), 'name': plugin.name, 'version': plugin.version, 'website': plugin.website, 'user_support_text': plugin.user_support_text, 'description': plugin.description, 'short_description': plugin.short_description, 'actions': {} } for id, action in plugin.actions.items(): state['actions'][id] = get_action_state(action) return state def get_action_state(action): import itertools state = { 'id': action.id, 'name': action.name, 'description': action.description, 'signature': [], 'epilog': [], 'deprecated': action.deprecated, } sig = action.signature for name, spec in itertools.chain(sig.signature_order.items(), sig.outputs.items()): data = {'name': name, 'repr': _get_type_repr(spec.qiime_type), 'ast': spec.qiime_type.to_ast()} if name in sig.inputs: type = 'input' elif name in sig.parameters: type = 'parameter' else: type = 'output' data['type'] = type if spec.has_description(): data['description'] = spec.description if spec.has_default(): data['default'] = spec.default data['metavar'] = _get_metavar(spec.qiime_type) data['multiple'], data['is_bool_flag'], data['metadata'] = \ _special_option_flags(spec.qiime_type) state['signature'].append(data) return state def _special_option_flags(type): import qiime2.sdk.util import itertools multiple = None is_bool_flag = False metadata = None style = qiime2.sdk.util.interrogate_collection_type(type) if style.style is not None: multiple = style.view.__name__ if style.style == 'simple': names = {style.members.name, } elif style.style == 'complex': names = {m.name for m in itertools.chain.from_iterable(style.members)} else: # composite or monomorphic names = {v.name for v in style.members} if 'Bool' in names: is_bool_flag = True else: # not collection expr = style.expr if expr.name == 'Metadata': multiple = 'list' metadata = 'file' elif expr.name == 'MetadataColumn': metadata = 'column' elif expr.name == 'Bool': is_bool_flag = True return multiple, is_bool_flag, metadata def _get_type_repr(type): import qiime2.sdk.util type_repr = repr(type) style = qiime2.sdk.util.interrogate_collection_type(type) if not qiime2.sdk.util.is_semantic_type(type) and \ not qiime2.sdk.util.is_union(type): if style.style is None: if style.expr.predicate is not None: type_repr = repr(style.expr.predicate) elif not type.fields: type_repr = None elif style.style == 'simple': if style.members.predicate is not None: type_repr = repr(style.members.predicate) return type_repr def _get_metavar(type): import qiime2.sdk.util name_to_var = { 'Visualization': 'VISUALIZATION', 'Int': 'INTEGER', 'Str': 'TEXT', 'Float': 'NUMBER', 'Bool': '', 'Jobs': 'NJOBS', 'Threads': 'NTHREADS', } style = qiime2.sdk.util.interrogate_collection_type(type) multiple = style.style is not None if style.style == 'simple': inner_type = style.members elif not multiple: inner_type = type else: inner_type = None if qiime2.sdk.util.is_semantic_type(type): metavar = 'ARTIFACT' elif qiime2.sdk.util.is_metadata_type(type): metavar = 'METADATA' elif style.style is not None and style.style != 'simple': metavar = 'VALUE' elif qiime2.sdk.util.is_union(type): metavar = 'VALUE' else: metavar = name_to_var[inner_type.name] if (metavar == 'NUMBER' and inner_type is not None and inner_type.predicate is not None and inner_type.predicate.template.start == 0 and inner_type.predicate.template.end == 1): metavar = 'PROPORTION' if multiple or type.name == 'Metadata': if metavar != 'TEXT' and metavar != '' and metavar != 'METADATA': metavar += 'S' metavar += '...' 
    return metavar


q2cli-2024.5.0/q2cli/core/usage.py000066400000000000000000000612501462552630000164050ustar00rootroot00000000000000
# ----------------------------------------------------------------------------
# Copyright (c) 2016-2023, QIIME 2 development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------

import collections
import os
import pkg_resources
import re
import shlex
import textwrap

from typing import Any, Callable, Dict, List, Tuple

from qiime2 import ResultCollection
import qiime2.sdk.usage as usage
from qiime2.sdk.usage import (
    UsageVariable, Usage, UsageInputs, UsageOutputs, UsageOutputNames
)
from qiime2.sdk import Action
from qiime2.core.archive.provenance_lib.usage_drivers import (
    build_header, build_footer
)
from qiime2.core.archive.provenance_lib import ProvDAG

import q2cli.util as util
from q2cli.core.state import get_action_state
import q2cli.click.option


def write_example_data(action, output_dir):
    for example_name, example in action.examples.items():
        cli_name = util.to_cli_name(example_name)
        example_path = os.path.join(output_dir, cli_name)

        use = CLIUsage()
        example(use)

        for fn, val in use.get_example_data():
            os.makedirs(example_path, exist_ok=True)
            path = os.path.join(example_path, fn)
            val.save(path)

            try:
                hint = repr(val.type)
            except AttributeError:
                hint = 'Metadata'

            yield hint, path


def write_plugin_example_data(plugin, output_dir):
    for name, action in plugin.actions.items():
        path = os.path.join(output_dir, util.to_cli_name(name))

        yield from write_example_data(action, path)


class CLIUsageVariable(usage.UsageVariable):
    EXT = {
        'artifact': '.qza',
        'artifact_collection': '/',
        'visualization_collection': '/',
        'visualization': '.qzv',
        'metadata': '.tsv',
        'column': '',
        'format': '',
    }

    ELEMENT_EXT = {
        'artifact_collection': EXT['artifact'],
        'visualization_collection': EXT['visualization']
    }

    @property
    def ext(self):
        return self.EXT[self.var_type]

    @staticmethod
    def to_cli_name(val):
        return util.to_cli_name(val)

    def _key_helper(self, input_path, key):
        if self.var_type not in self.COLLECTION_VAR_TYPES:
            raise ValueError(
                f'Cannot key non-collection type {self.var_type}')

        return "%s%s%s" % (input_path, key, self.ELEMENT_EXT[self.var_type])

    def to_interface_name(self):
        if hasattr(self, '_q2cli_ref'):
            return self._q2cli_ref

        cli_name = '%s%s' % (self.name, self.ext)

        # don't disturb file names, this will break importing where QIIME 2
        # relies on specific filenames being present in a dir
        if self.var_type not in ('format', 'column'):
            cli_name = self.to_cli_name(cli_name)

        return cli_name

    def assert_has_line_matching(self, path, expression, key=None):
        if not self.use.enable_assertions:
            return

        INDENT = self.use.INDENT
        input_path = self.to_interface_name()
        expr = shlex.quote(expression)

        if key:
            input_path = self._key_helper(input_path, key)

        lines = [
            'qiime dev assert-result-data %s \\' % (input_path,),
            INDENT + '--zip-data-path %s \\' % (path,),
            INDENT + '--expression %s' % (expr,),
        ]

        self.use.recorder.extend(lines)

    def assert_output_type(self, semantic_type, key=None):
        if not self.use.enable_assertions:
            return

        INDENT = self.use.INDENT
        input_path = self.to_interface_name()

        if key:
            input_path = self._key_helper(input_path, key)

        lines = [
            'qiime dev assert-result-type %s \\' % (input_path,),
            INDENT + '--qiime-type %s' % (str(semantic_type),),
        ]

        self.use.recorder.extend(lines)


class CLIUsage(usage.Usage):
    INDENT = 
' ' * 2 def __init__(self, enable_assertions=False, action_collection_size=None): super().__init__() self.recorder = [] self.init_data = [] self.enable_assertions = enable_assertions self.action_collection_size = action_collection_size self.output_dir_counter = collections.defaultdict(int) def usage_variable(self, name, factory, var_type): return CLIUsageVariable(name, factory, var_type, self) def render(self, flush=False): rendered = '\n'.join(self.recorder) if flush: self.recorder = [] self.init_data = [] return rendered def init_artifact(self, name, factory): variable = super().init_artifact(name, factory) self.init_data.append(variable) return variable def init_artifact_collection(self, name, factory): variable = super().init_artifact_collection(name, factory) self.init_data.append(variable) return variable def construct_artifact_collection(self, name, members): variable = super().construct_artifact_collection( name, members ) rc_dir = variable.to_interface_name() keys = members.keys() names = [name.to_interface_name() for name in members.values()] keys_arg = '( ' for key in keys: keys_arg += f'{key} ' keys_arg += ')' names_arg = '( ' for name in names: names_arg += f'{name} ' names_arg += ')' lines = [ '## constructing result collection ##', f'rc_name={rc_dir}', 'ext=.qza', f'keys={keys_arg}', f'names={names_arg}', 'construct_result_collection', '##', ] self.recorder.extend(lines) return variable def get_artifact_collection_member(self, name, variable, key): accessed_variable = super().get_artifact_collection_member( name, variable, key ) rc_dir = variable.to_interface_name() member_fp = os.path.join(rc_dir, f'{key}.qza') lines = [ '## accessing result collection member ##', f'ln -s {member_fp} {accessed_variable.to_interface_name()}', '##', ] self.recorder.extend(lines) return variable def import_from_format(self, name, semantic_type, variable, view_type=None): imported_var = super().import_from_format( name, semantic_type, variable, view_type=view_type) in_fp = variable.to_interface_name() out_fp = imported_var.to_interface_name() lines = [ 'qiime tools import \\', self.INDENT + '--type %r \\' % (semantic_type,) ] if view_type is not None: if type(view_type) is not str: view_type = view_type.__name__ lines.append(self.INDENT + '--input-format %s \\' % (view_type,)) lines += [ self.INDENT + '--input-path %s \\' % (in_fp,), self.INDENT + '--output-path %s' % (out_fp,), ] self.recorder.extend(lines) return imported_var def init_format(self, name, factory, ext=None): if ext is not None: name = '%s.%s' % (name, ext.lstrip('.')) variable = super().init_format(name, factory, ext=ext) self.init_data.append(variable) return variable def init_metadata(self, name, factory): variable = super().init_metadata(name, factory) self.init_data.append(variable) return variable def comment(self, text): self.recorder += ['# ' + ln for ln in textwrap.wrap(text, width=74)] def peek(self, variable): var_name = variable.to_interface_name() self.recorder.append('qiime tools peek %s' % var_name) def merge_metadata(self, name, *variables): var = super().merge_metadata(name, *variables) # this is our special "short-circuit" attr to handle special-case # .to_interface_name() needs var._q2cli_ref = ' '.join(v.to_interface_name() for v in variables) return var def get_metadata_column(self, name, column_name, variable): var = super().get_metadata_column(name, column_name, variable) # this is our special "short-circuit" attr to handle special-case # .to_interface_name() needs var._q2cli_ref = 
(variable.to_interface_name(), column_name) return var def view_as_metadata(self, name, variable): # use the given name so that namespace behaves as expected, # then overwrite it because viewing is a no-op in q2cli var = super().view_as_metadata(name, variable) # preserve the original interface name of the QZA as this will be # implicitly converted to metadata when executed. var._q2cli_ref = variable.to_interface_name() return var def action(self, action, inputs, outputs): variables = super().action(action, inputs, outputs) vars_dict = variables._asdict() plugin_name = util.to_cli_name(action.plugin_id) action_name = util.to_cli_name(action.action_id) self.recorder.append('qiime %s %s \\' % (plugin_name, action_name)) action_f = action.get_action() action_state = get_action_state(action_f) ins = inputs.map_variables(lambda v: v.to_interface_name()) outs = {k: v.to_interface_name() for k, v in vars_dict.items()} signature = {s['name']: s for s in action_state['signature']} for param_name, value in ins.items(): self._append_action_line(signature, param_name, value) max_collection_size = self.action_collection_size if max_collection_size is not None and len(outs) > max_collection_size: dir_name = self._build_output_dir_name(plugin_name, action_name) self.recorder.append( self.INDENT + '--output-dir %s \\' % (dir_name)) self._rename_outputs(vars_dict, dir_name) else: for param_name, value in outs.items(): self._append_action_line(signature, param_name, value) self.recorder[-1] = self.recorder[-1][:-2] # remove trailing \ return variables def _build_output_dir_name(self, plugin_name, action_name): base_name = '%s-%s' % (plugin_name, action_name) self.output_dir_counter[base_name] += 1 current_inc = self.output_dir_counter[base_name] if current_inc == 1: return base_name return '%s-%d' % (base_name, current_inc) def _rename_outputs(self, vars_dict, dir_name): for signature_name, variable in vars_dict.items(): name = '%s%s' % (signature_name, variable.ext) variable._q2cli_ref = os.path.join(dir_name, name) def _append_action_line(self, signature, param_name, value): param_state = signature[param_name] if value is not None: for opt, val in self._make_param(value, param_state): line = self.INDENT + opt if val is not None: line += ' ' + val line += ' \\' self.recorder.append(line) def _make_param(self, value, state): state = state.copy() type_ = state.pop('type') opt = q2cli.click.option.GeneratedOption(prefix=type_[0], **state) option = opt.opts[0] # INPUTS AND OUTPUTS if type_ in ('input', 'output'): if isinstance(value, str): return [(option, value)] else: if isinstance(value, set): value = sorted(value) return [(option, ' '.join(value))] # METADATA FILE if state['metadata'] == 'file': return [(option, value)] # METADATA COLUMN if state['metadata'] == 'column': # md cols are special, we have pre-computed the interface-specific # names and stashed them in an attr, so unpack to get the values fn, col_name = value return [(option, fn), (opt.q2_extra_opts[0], col_name)] # PARAMETERS if type_ == 'parameter': if isinstance(value, set): value = [shlex.quote(str(v)) for v in value] return [(option, ' '.join(sorted(value)))] if isinstance(value, list): return [(option, ' '.join(shlex.quote(str(v)) for v in value))] if isinstance(value, (dict, ResultCollection)): return [(option, ' '.join(f'{k}:{shlex.quote(str(v))}' for k, v in value.items()))] if type(value) is bool: if state['ast']['type'] == 'expression': if value: return [(option, None)] else: return [(opt.secondary_opts[0], None)] else: # This is a 
more complicated param that can't be expressed # as a typical `--p-foo/--p-no-foo` so default to baseline # parameter handling behavior. pass if type(value) is str: return [(option, shlex.quote(value))] return [(option, str(value))] raise Exception('Something went terribly wrong!') def get_example_data(self): for val in self.init_data: yield val.to_interface_name(), val.execute() class ReplayCLIUsageVariable(CLIUsageVariable): def to_interface_name(self): ''' Differs from parent method in that metadata is not kebab-cased, filepaths are preserved instead. ''' if hasattr(self, '_q2cli_ref'): return self._q2cli_ref cli_name = '%s%s' % (self.name, self.ext) # don't disturb file names, this will break importing where QIIME 2 # relies on specific filenames being present in a dir if self.var_type not in ('format', 'column', 'metadata'): cli_name = self.to_cli_name(cli_name) return cli_name class ReplayCLIUsage(CLIUsage): shebang = '#!/usr/bin/env bash' header_boundary = ('#' * 79) set_ex = [ '# This tells bash to -e exit immediately if a command fails', '# and -x show all commands in stdout so you can track progress', 'set -e -x', '' ] copyright = pkg_resources.resource_string( __package__, 'assets/copyright_note.txt' ).decode('utf-8').split('\n') how_to = pkg_resources.resource_string( __package__, 'assets/cli_howto.txt' ).decode('utf-8').split('\n') def __init__(self, enable_assertions=False, action_collection_size=None): ''' Identical to parent but creates header and footer attributes. Parameters ---------- enable_assertions : bool Whether to render has-line-matching and output type assertions. action_collection_size : int The number of outputs returned by an action above which outputs are grouped into and accessed from an --output-dir. ''' super().__init__() self.header = [] self.footer = [] self.enable_assertions = enable_assertions self.action_collection_size = action_collection_size def usage_variable(self, name, factory, var_type): return ReplayCLIUsageVariable(name, factory, var_type, self) def _append_action_line(self, signature, param_name: str, value): ''' Extends the parent method to accommodate action signatures that may differ between those found in provenance and those accessible in the currently executing environment. Parameters ---------- signature : dict of str -> dict Mapping of name of signature item to dict of signature spec data. param_name : str The name of the parameter for which to render a CLI line. value : any The value of an item from a Results. ''' param_state = signature.get(param_name) if param_state is not None: for opt, val in self._make_param(value, param_state): line = self.INDENT + opt if val is not None: line += ' ' + val line += ' \\' self.recorder.append(line) else: # no matching param name line = self.INDENT + ( '# FIXME: The following parameter name was not found in ' 'your current\n # QIIME 2 environment. This may occur ' 'when the plugin version you have\n # installed does not ' 'match the version used in the original analysis.\n # ' 'Please see the docs and correct the parameter name ' 'before running.\n') cli_name = re.sub('_', '-', param_name) line += self.INDENT + '--?-' + cli_name + ' ' + str(value) line += ' \\' self.recorder.append(line) def _make_param(self, value: Any, state: Dict) -> List[Tuple]: ''' Wraps metadata filenames in <> to force users to replace them. Parameters ---------- value : any A value of a Result. state : dict A collection of info about an item from an action signature. See q2cli.core.state.py. 
        Returns
        -------
        list of tuple
            See q2cli.core.usage.CLIUsage._make_param for possible outputs.
        '''
        if state['metadata'] == 'column':
            value = (f'<{value[0]}>', *value[1:])
        if state['metadata'] == 'file':
            value = f'<{value}>'
        return super()._make_param(value, state)

    def import_from_format(
        self, name: str, semantic_type: str, variable: UsageVariable,
        view_type: Any = None
    ) -> UsageVariable:
        '''
        Identical to super.import_from_format, but writes --input-path
        <your data here> and follows import block with a blank line.

        Parameters
        ----------
        name : str
            The name of the created UsageVariable.
        semantic_type : str
            The semantic type of the created UsageVariable.
        variable : UsageVariable
            A UsageVariable of some format type that will materialize the
            data to be imported.
        view_type : format or str
            The view type for importing.

        Returns
        -------
        UsageVariable
            Of type artifact.
        '''
        # need the super().super() here, so pass self to super
        imported_var = Usage.import_from_format(
            self, name, semantic_type, variable, view_type=view_type
        )

        out_fp = imported_var.to_interface_name()

        lines = [
            'qiime tools import \\',
            self.INDENT + '--type %r \\' % (semantic_type,)
        ]

        if view_type is not None:
            lines.append(
                self.INDENT + '--input-format %s \\' % (view_type,)
            )

        lines += [
            self.INDENT + '--input-path <your data here> \\',
            self.INDENT + '--output-path %s' % (out_fp,),
        ]
        lines.append('')

        self.recorder.extend(lines)

        return imported_var

    def init_metadata(
        self, name: str, factory: Callable, dumped_md_fn: str = ''
    ) -> UsageVariable:
        '''
        Like parent, but appropriately handles filepaths for recorded md fps.

        Parameters
        ----------
        name : str
            The name of the UsageVariable to be created.
        factory : Callable
            The factory responsible for generating the realized value of the
            UsageVariable.

        Returns
        -------
        UsageVariable
            Of type metadata.
        '''
        variable = super().init_metadata(name, factory)

        self.init_data.append(variable)

        if dumped_md_fn:
            variable.name = f'"{dumped_md_fn}.tsv"'
        else:
            variable.name = '<your metadata filepath>'

        return variable

    def comment(self, text):
        '''
        Identical to parent method, but pads comments with an extra newline.
        '''
        super().comment(text)
        self.recorder.append('')

    def action(
        self, action: Action, inputs: UsageInputs, outputs: UsageOutputNames
    ) -> UsageOutputs:
        '''
        Overrides parent method to fill in missing output lines from
        action_f.signature. Also pads actions with an extra newline.

        Parameters
        ----------
        action : Action
            The underlying sdk.Action object.
        inputs : UsageInputs
            Mapping of parameter names to arguments for the action.
        outputs : UsageOutputNames
            Mapping of registered output names to usage variable names.

        Returns
        -------
        UsageOutputs
            The results returned by the action.
''' variables = Usage.action(self, action, inputs, outputs) vars_dict = variables._asdict() # get registered collection of output names so we don't miss any action_f = action.get_action() missing_outputs = {} for output in action_f.signature.outputs: try: # If we get a match on output-name, the correct pair is already # in vars_dict and we can continue getattr(variables, output) except AttributeError: # Otherwise, we should add filler values to missing_outputs missing_outputs[output] = f'XX_{output}' plugin_name = q2cli.util.to_cli_name(action.plugin_id) action_name = q2cli.util.to_cli_name(action.action_id) self.recorder.append('qiime %s %s \\' % (plugin_name, action_name)) action_f = action.get_action() action_state = get_action_state(action_f) ins = inputs.map_variables(lambda v: v.to_interface_name()) outs = {k: v.to_interface_name() for k, v in vars_dict.items()} outs.update(missing_outputs) signature = {s['name']: s for s in action_state['signature']} for param_name, value in ins.items(): self._append_action_line(signature, param_name, value) max_collection_size = self.action_collection_size if max_collection_size is not None and len(outs) > max_collection_size: dir_name = self._build_output_dir_name(plugin_name, action_name) self.recorder.append( self.INDENT + '--output-dir %s \\' % (dir_name) ) self._rename_outputs(vars_dict, dir_name) else: for param_name, value in outs.items(): self._append_action_line(signature, param_name, value) self.recorder[-1] = self.recorder[-1][:-2] # remove trailing `\` self.recorder.append('') return variables def render(self, flush=False): ''' Return a newline-seperated string of CLI script. Parameters ---------- flush : bool Whether to 'flush' the current code. Importantly, this will clear the top-line imports for future invocations. Returns ------- str The rendered string of CLI code. ''' if self.header: self.header = self.header + [''] if self.footer: self.footer = [''] + self.footer rendered = '\n'.join( self.header + self.set_ex + self.recorder + self.footer ) if flush: self.header = [] self.footer = [] self.recorder = [] self.init_data = [] return rendered def build_header(self): '''Constructs a renderable header from its components.''' self.header.extend(build_header( self.shebang, self.header_boundary, self.copyright, self.how_to )) # for creating result collections in bash bash_rc_function = [ 'construct_result_collection () {', '\tmkdir $rc_name', '\ttouch $rc_name.order', '\tfor key in "${keys[@]}"; do', '\t\techo $key >> $rc_name.order', '\tdone', '\tfor i in "${!keys[@]}"; do', '\t\tln -s ../"${names[i]}" $rc_name"${keys[i]}"$ext', '\tdone', '}' ] self.header.extend([ '## function to create result collections ##', *bash_rc_function, '##', ]) def build_footer(self, dag: ProvDAG): ''' Constructs a renderable footer using the terminal uuids of a ProvDAG. ''' self.footer.extend(build_footer(dag, self.header_boundary)) q2cli-2024.5.0/q2cli/tests/000077500000000000000000000000001462552630000151355ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/tests/__init__.py000066400000000000000000000005351462552630000172510ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- q2cli-2024.5.0/q2cli/tests/data/000077500000000000000000000000001462552630000160465ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/tests/data/concatenated_ints.qza000066400000000000000000000467041462552630000222630ustar00rootroot00000000000000PK$Vx?Ua24726da10-df52-4e17-afb8-465a0bee7be7/metadata.yaml+-LR0172KI44MI355I54MLK513M4HJM5OJ5*,HR+ N-,MKN5J/M,At,JM./tqPK$VӘ24726da10-df52-4e17-afb8-465a0bee7be7/checksums.md5S˪TA W83H'pFA}N`;WDԋnR]IU%j9clY>z7ơ9w#.˃o2dWyxM@*`؝ZS-"-˭tle;_;/r1?l "Rf0􈪖dMzԳ`_Iɀ{H(IB)|qiTy1SE2 .+PUҊ- 5(=+2U ^n6OɅ(q!A`S :k0?n}͹XcIUhf٦\;3m0AŽgv4MSLesM Brަ<`*ժ0xjAJ8Kɦ1ƦoPK$V\97,4726da10-df52-4e17-afb8-465a0bee7be7/VERSION uU0J,J,KR0J+JM-/ʶR020233KI-36K765O40PK$Vx?Ua=4726da10-df52-4e17-afb8-465a0bee7be7/provenance/metadata.yaml+-LR0172KI44MI355I54MLK513M4HJM5OJ5*,HR+ N-,MKN5J/M,At,JM./tqPK$Vnt}=4726da10-df52-4e17-afb8-465a0bee7be7/provenance/citations.bibWv+,BҤuNֈ{MNdDm<(ScLdţe,,xnݺU AB|hJuxw0w>;yWi~=rUH1R%%u5 vGu4 )5fMȹ?^4륗:%jIҗLqqRa'.IfFpD$/.kxgDϛ^ I60?18^d?oW2i/so4Op>mtS7\ nRe' _׭$"o[&T}du+o@dTT{$Ey˯U"$X{K1~xe (D&8-dbFCk6H~=n?n:Cv[ýf0"=n R,h?aTjtg<:4A!3e;;_ ]F/QQ(QT|wHF#O&d:y>! 4z~]- O\nwM(tOgDR/xr 𠿁`0ޤ3pJ*WP.Am0799}D2dH $j#@r0-Do! &M~N r|lȸ*^U~{ wpxZTW*X z-[*_,1.@eTrTBz汹Dgw`Wܘ׊ZNe#B&pZ5nvWn!jl.ы}W¡m1gp&6++EqA@b} Ũ ӃOYvv`CǤWAR=[X%\/ٟ! it;@%*2oSVh!?bo5TlBşý-ݐhjnlGpu?#wcn:1mE lJ77sS?AP+BS~E2IGu _c)S|!14 tjbJa9[lDgdp~Hi*:a)0GmSk/." Q-̇6m yH-M3@~= daPK$V\9774726da10-df52-4e17-afb8-465a0bee7be7/provenance/VERSION uU0J,J,KR0J+JM-/ʶR020233KI-36K765O40PK怯V+Ual4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/metadata.yaml+-LR06H3JI6MIKM517MM4H2535HI66N37J2*,HR+ N-,MKN5J/M,At,JM./tqPK怯V3y[>l4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/citations.bibWv7+Y I^93 mETػ" vDm<(b,dGӦl$%(ܺu'4TmV>Q񝣝΋^[ur+/ż찗%Y6Jߌfc>ɸ^$;43+n: qsv_~,EcY% 9s$aoIIXR6 w"4'EɘʼnBAۈTcSu [ݽ s4Rŵ<+R'%Y-OtdzJȆݤNM\S:p)6[ŸU̩ҲFz8;8CTm5И`I$8|=Q(HWFYhXr{+ 5xzu/hS`JnjyTkC$w$<Io5KN1V+裯 %i72RU'ous tkQP Y(>٠.im^ʊ2&۝H oRR"k!s)>/^9JP^>sv^dbEø+ʘe?UK!ɭ@i@ȠǭRߑ̣?>#@*v-r73Ι2KЯ"<9= -*A%-1y J?ABWɖU|C)M9 0/y8Ѡs M;4\yz@ײ Ie<_G5Iл2FVȶ!+q&;]^kA"=vaxMSB0r As;;<n?v~k$d_ǵCJK]ĽKXqk*@Bx tSY];TYYx?uٰUbnL芳 0^964&gl'M tGϟ=ZK_^@TTx>?{C;B}ZjᨏŔd)Msɦ #B7+{[\wpA?ʴ URVbf(%|)#[)?_yW6aUE/:[I{ ύik1w:ZYcl#xޙ7CQCe>f "E;^ݲ[ZlSK47Il>`%e7F #Rn{P)FOs =ٍ ۑ m:n Mp0uB7kܲ$tr1<˯`&{?Tn')SC] rxV~`?.C1;V!T[)8 t{ۤgbp |DQy@az \1[=ý?PK怯V\97f4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/VERSION uU0J,J,KR0J+JM-/ʶR020233KI-36K765O40PK怯Vͦcv fq4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/action/action.yamlXێ}߯&A.C` 1l (Rs5"5ǧT DUJݝ>Dkeuiw1;fY-$>66)x-DΏN};+[_Ʋ,88%Som~H7wEۯq`(]J>h#*v)mXgj=b0^Gۏ??STRYٙO(:.V![z56Lfu '9=y9UcN9e(7*b)O_㿾mOR<[}~`m=E-\4o =cwCaїgQEa>Y [5$DFe~e.F)/|(݂E^z,}8 >(_M1Ϣ4St7{l?X}St~bpyIOؐ :5xbG?;o3g@ d29" r*&˂=֙lrBl7<4^ T́tż1=yť*5)eT@GX4*[}JF?_*ɽytݢx9r:*V&Vsƪ *0H`K֣2z4Ҋk#|r-E(z)N AjKxqU9ID$.rEY/wqH\n>ظC{|Aw StĵP7dmרFa8Ԅ3^j6ďѲab٤O\ ZYR.4:6\xɡF#k|N,'V d஖<$%qNJ sAEy XXZ@(,-bqXVW`FieԺ녁̏V!G&C'ԣ_ĤFr4.Gr| ii6J~a09d.Mp/C_M]6o2 hCNe&Z3.-zTT(1tBaV@Fa Ft/eEϲjdmc'<趦ABFt;vA%9M3jP8^vZp=I|+f ~yCڿ뺃`t]P8;3SDZO`.qu5NzJL;'kVZ_凞zlβH\ ߍmR XAEp_,5,g0Qjb0k T)C='XPp,0(a8$~ KUjqCxJggOAi %PZۂXi6.zׅY#Ntc=4W(wLkP?ݧO?ͮDuHx/Av#XY3~/;upC :A}s?ls\ZhC*s$X4Haٸ;\gd6WJ7 h 8cnL8B;^ߓKĮɼwojԂ!}E#ʖY‚OZ;;Lf{v}ݍi/X!ʖy^( |sv¡`w`=PHo(OoʅEu8,3%wU"dp k)v}'* yIn(j1J.+6?P CVXJJ\>Ql3;^9`Fa9YiBv牱C""yu\6Gɺ܉=|1H= 3͈KH<Ě ˦6v:ʏ XՃ+ܪ5q}d~!i.Pq Zy 
ArEmX;0̌5TܠfdItw+kr<)8)0[O+YU Sm]ɓfyWJ^,{m8(@ ;p/!c?pI&-뗛Fy눳%r^M&B9 IE):8ſ:Ee&Da4 g!XԂy~{$0=yw;9-=ej{w^p^c)ܧsBguAOJPD:.]Cy4'P3ww_Wz)R{)K1x5 vwwz;Xt?z{׾0plZ;[q͸٭̅}]qnyR)Q}!NedVt;_fZ`J%>Ir=/sJ6Sv⓸vVf*uY/xIhYBGNˌcזLD5^7~1,Ez/v^+eV|,6ɤSX%uԱ5A]px!>#/Fpgj&_ i"z= v'nG!pPFᑅGU2j>5JSLC*A'B{capٔUP|\/E3.d,rk܁EtÆU௤VWI/kg4:7{]0(rMHSpA;LHsqpV=w{ {'s\F4ޙ1f U%==W>=\]&3'徫uQ7 2^|u"=< fMB1$iєz1NzJ)׍[$QAd\U}(fI$Z1v)Wn@Yd[0vH*~ow^ z 0h?𣁑"[Ǎ(k.4bH_zt$dBUSgrFaWjg^a1R#>!u9JY3,3aʋLQrrIm{%7ϼ2.fA5<Ӌ^r+U/1rݍ/zq{]T[7ap<m!_rho{AX"c9<|)#[:[=KM m/b C%g98hoS{'3oR?բ9e[u(<*ɠMFsmZawDNB՚[u+]p 66HK!&+fT8{U|/OXnp"Z3wFulc'M£"5+JY5m #^xőr~9xG7b(̠bugB"p+ (vm"rarpXgEYh䪍0=8d 7}SW-<π`ݜS' :" TZ9 24A Mw{-}QZX>3@b"wȮAps^G`OQ3wOw6M-cN悉`muܣIe9gy 5``GŪ\'Ɋߊ{nw.,ՠ=9(JRtՌj+lo­+7utK@ s*tvkqZG|!Y󭴶_&[΄&\[Rc?b;͔iPPF`Nl%v(Gh5`pؑ,x*ܘ.V~URd! WrV{ t/L9ѱ/Y"L,PH}ahΓ6lYnCCl=R|[6Q@&]ȴuF۲Ѡt-AWVt'7G~ (r86@EQ-{-%%%Ϙf6ADZ"C_x9]?QTǠ:CYnm0/4<5?UΈo6l0 4vagm~>J$(zE;wt]yVҼT]Xaj0wwf`A;M?~ՃjA1 iDRhs;Ԩc|#JG1Ҩa db/Fm y(յ3G'jl)t}W cK;X=1+mgXǯ2ktGDGؽ#;w.#AJ/am%mdNX7Mkг^`b!lp5>A}/쏷A -ZR+ٚlv G6Mׇ(i>BCɘ^A@N6`*Fÿx3׺G 8۪= aNc)U;{1} g )18l??[&Yʳ&&˯lCqj0bG?;n4L ʺI(2}l7[)$ M, @19. ˺kɋ,,DWV)J*aqPȽ;nԙw6* L!ẸL3 aɤPZG7AETRrqHrz&$g@ Qj5[\Eag >9]"`|'9p 2j xq9ID\0$rB ^rqH> ` ]zH*TP7ddmרFa8ԈW&5VhXbلWNh1Uظ3rp87&B5.tQ'; 8[u 3,[58g(@ w>.K{`[,5((!J9j PEC% 3bK-ٙkRckC ZnBCPkZ^@ ?J }Sf\eʉITŨC]-B5,l(k~'圪Wuv^Rz1c*C6Fn [@5 C@=Z4Ed(_BIN@vԢAC#OZ5Z/vP(|[M<+6 ^R(ȟDpkdmh&JW {/ *|F3+=LCi,X}xZ@]eѵ`}?b%^!3k=Ɋk&D+gnl/ 0{[hzHJ"#L%0ak!̰so[؅=&%_T8}CNn& Bڲd_wF t[ \#vr^3bPX^lwZ3$>]QJ5~qEڿ몃+]qm;pvf4>}:#2CRR;zx(1Uiޓ5 Θ;jhwc[ئV]P1V˷>l~yz>5gZqVkvJJ+G' F[> < Õ_15Л1Tr{n Z iv0dyvv;P~LJ7!vѳ0.$B`fu\1Aw>2I*@$~ +ɽ(&"G|iS! [96+U*R HEcD@(l"kɩϹ{Mf{U(UӸA 9h$!pƽ3Mial(]}pίYB|o(-8Nh̫z7JFUZߣlXa!~dDd`C8`xlj{]뫶H`ŒRolU~W7m' ~zC .1޺ T^_k~S575'ׯʆ*g]sdhLUދl)&卨$-*%/|u(n4M!TrQi T8Za)eP!`H{f9.@q `rԼm$\s#J?0c?O:z9NԵNᛌ@1l8lF5\B#\[v:.TB+7SJfNgUFY Sur/ 1(\5-9YWyr\52m'D ohG*2R+mnsM'yY%&tH{U`IK=>:1ƝJy4)W¼dKAR{iw230oa Dhز~4[G+0E b2-BT`iN-\S܋]TaBLc_~܂uK-׮EA`}g\ӀC4M^λs(o5}zX?> ,D3; *R;֕1?PK$VFd? 
!B4726da10-df52-4e17-afb8-465a0bee7be7/provenance/action/action.yamlYmܶ_|6zYi_>(hl("$JK/%Dnu ] uk 8#r^7C~'q5 KVBت ]k6J}/ RV -x5VG5>@;Vn2Pg?&v!Jncp2~Z Uٻ}x|kj@!d͋[<3aNc)U;{1?ΐ}WF8ÀpRmr%.3ݴ߲& h6#AXb>2ʆؿg57$6euPu-ctWYn_]!A~75x"# 74L ʺI(2}l7[)$fM, @1{*R]uiɋ,,DWV)J*'>⠐[3nԙw6* ~!ẸL0˙db(Yj0zSTD%u(1,wĪ(7x`B[pVcEŞX@i>Ӆj.B vЋw 0)x- 'bPF~B(&U`T %,zEbs(,OPW't2\| J%Mm79Yo5pP[S#j^5Ox?F*dF<[ʊr1ձq F79بyui:I8٪cx#LB^]5U–/%ﱢ7m\ɲdOS_l1CʇVZ\n^ZH>3iv!kOɥd W/uDƂ,YםgV7H(׈n gacv}tNd9vRPb9#kJgy=':$.ƶM;V]P1V˷.l~yz7gc)'mw,0H8$~ sUjF<&{γܩ LʹX|LJ7!vѳ0.$B`f롹Bcr]=޿ev=&RUIgdW{Q LE5S'(һO]s2/Էr.mVt÷Ti q?P@E&N'B?.7bUTM-B<4ZO=fǽOB܍{g3|=$4Q+vY}p/YB|ӯ(58|Y'ME~Q ie*z-(VXȯ{:%>m4X3oʲݕh/X!̖iZ9/ |qv¡` W`ݏPHPy}k(jO/ʆjvyguzˬ*v^4gN/m0.D <;oQq/_~Vס.4=PEʃP K)ӥnDDB3q+S}h#)lNF9PybwйH\,ףQww|dRa g3b;ye 1n'಩N%άrc>d4pVJi>1r/ 1(\5-{Ӝr${6U)Q1H jF6纔J{B&œì mu}U`IK=>:1Ɲޕ4hSxy Ȓ6iч;~ ['O"4ilY4[G+0E l2-BX`iN-\S܋]TaBLc_~܂uK-׮EA`}g\ӀC4t}.ݢX ~\$OpaPA@GԧޱyPK$V׏ 24726da10-df52-4e17-afb8-465a0bee7be7/data/ints.txt3222D"́PK$Vx?Ua24726da10-df52-4e17-afb8-465a0bee7be7/metadata.yamlPK$VӘ24726da10-df52-4e17-afb8-465a0bee7be7/checksums.md5PK$V\97,4726da10-df52-4e17-afb8-465a0bee7be7/VERSIONPK$Vx?Ua=4726da10-df52-4e17-afb8-465a0bee7be7/provenance/metadata.yamlPK$Vnt}=4726da10-df52-4e17-afb8-465a0bee7be7/provenance/citations.bibPK$V\977 4726da10-df52-4e17-afb8-465a0bee7be7/provenance/VERSIONPK怯V+Ual& 4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/metadata.yamlPK怯V3y[>l4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/citations.bibPK怯V\97f4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/VERSIONPK怯Vͦcv fqD4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/368f2dc6-dfea-475e-a0b7-650dc33f72b7/action/action.yamlPKVTtRTclI"4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/dbce99c9-1150-48bf-a9c9-67af393a7d40/metadata.yamlPKV3, /l'#4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/dbce99c9-1150-48bf-a9c9-67af393a7d40/citations.bibPKV\97f-4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/dbce99c9-1150-48bf-a9c9-67af393a7d40/VERSIONPKVH 6 q.4726da10-df52-4e17-afb8-465a0bee7be7/provenance/artifacts/dbce99c9-1150-48bf-a9c9-67af393a7d40/action/action.yamlPK$VFd? 
!B94726da10-df52-4e17-afb8-465a0bee7be7/provenance/action/action.yamlPK$V׏ 2^E4726da10-df52-4e17-afb8-465a0bee7be7/data/ints.txtPKEq2cli-2024.5.0/q2cli/tests/data/mapping_config.toml000066400000000000000000000003541462552630000217250ustar00rootroot00000000000000[parsl] strategy = "None" [[parsl.executors]] class = "ThreadPoolExecutor" label = "default" max_threads = 1 [[parsl.executors]] class = "_TEST_EXECUTOR_" label = "test" max_threads = 1 [parsl.executor_mapping] list_of_ints = "test" q2cli-2024.5.0/q2cli/tests/data/parse_dir_test/000077500000000000000000000000001462552630000210555ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/tests/data/parse_dir_test/inner/000077500000000000000000000000001462552630000221705ustar00rootroot00000000000000q2cli-2024.5.0/q2cli/tests/data/parse_dir_test/inner/right_ints.qza000066400000000000000000000333231462552630000250630ustar00rootroot00000000000000PKtVV}Wc257cbec31-a778-4f2e-be6b-7b4dd3ee98d7/metadata.yaml+-LR05ONJM66M475I3JMJ5K5O2II1NMH1*,HR+ N-,MKN5J/M,A 3r,JM./trPKuVVѺ: 257cbec31-a778-4f2e-be6b-7b4dd3ee98d7/checksums.md5ͪT1 f&Iyp>IS9+3ѷr2@ɯ6ڂKW$ Dz||zͻڣJ]'UheyMnzϟ KXge8[d+*پ8\ge&e}"Iglyi+{J(❶&@R$N[`qhԍQvAvȕ@w|}-^kQ öjlWF [- ǥT+vq6w;>7eBn]ZἐcW3G^X:I K:o#<"N8yrX.NKxf" ;3y0ZBk9Op'=y 'r636OןGiIjWfotBXw  am$Ɯ%} 6xB6F>ʕX5p89 ǩ۲vd tا6gduGצ.J\=a1cw2cJdrW|5N~v56s?O1Kwue4m#{ن^$XZ׌LtiQxvC*Lj`\vB+99mCԈ_H} /9</JkNO<(~4cS/‡W6C B]&7!㾫uQ7 r^|u"=" nMB1{()8Tz1FzJ)W[$$%QVd\=(&}'USP\%;c:/Sy €;bat]r{owFAaN3!;'sE# Qft.4bH_ztۻ$dWBWSgrFiWjgQa9R#>t9JY3,G0aڋLQlrHm{[@B{=U` Eo Ϟ}td,WJ zR\Vؖ[uHDi&bEtBZ`ruw{8hA ] FW 4+XG/=GjW`p{f'U-S9s65]zZr&2lu[]a{,Qc6(?!s+`۷X1`(~Yko4^]~-^\5H-%nG 6ml:i!KjSS|ogOj32bTp{gFj~A0-0'|/}SbVSvI}x [sv_N[M T8bfW7 H\fr׃/f\آ$?Nf fgk5;?W|8,nNmKlp(8 Rs0W%Noӭ{Z63! Wt3M`t息 ](lUNxsr|N3Q;yK65ڨɣ)8Md#/cyNVd-{-/,=7WRt_+9jE2T'c7GrVKx'd ^ !~8]ӱ2- *U ܉Ne= o ZLj@Y[]ΏJ,dpŵsS/i+nڑ)ѡ^t(\٨y>BEc(S9p^x-wQ`TJ`EVޙ(rD:nL]`v|:%Mn⛬0Rt9x\$}+6TpL'52fFƷghN;%׈-9Or NٻF3Dt Q# |k>H= fpDy rT}@ >,k у|rcE;_u32 (s+ƮBՕI%W _Au{ި;1:: G$h`dH{q㡶6,ں. 8׊/ {$dBUSgrFaWjg^a1R#>!u9JY3,3aʋLQrrIm{%7ϼ2.fàEo O/^|TReƗWbU~x #(.,ZXvGh *0P :3ˣ7"?%3$ؔ@.fΐ`pІyp9/#(登;ݖ1'sD06a/^ [6gun{4鏿,G[,O |XUkТ$Y[`pyԡ'G=_IcCJq@mTMUb.shbAtXb-Nhv//kvd+ؙބkKjr.TSr:V&$ mSU8xS?/'چ9Q9K4ڸNDՃ1@[Cd +byNd˦זk4n=]qQ.o>mnd<7G}s@@??lty|Nc3eZ,4T(,[uI{Q!ZeM\'vd- itw 7f8gYxi⭜^yb :SrFft c% =RG_ddO!v\J6nc>YI-e(yb .Qd:#XFl4t𙙢eb:ʊD/R#a'Hم\,Ğ#R&fԤ-/VPK#SVv 53f57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/VERSION uU0J,J,KR0J+JM-/ʶR0202336KO205611PK#SVꌷ 4 q57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/action/action.yamlY[ܶ~_EEԲJK&C|o]{ў}o *F_Ug:* ᩆ5?AyE?DH6E}kt &mf-Q<׊gjSQf|VMlz/4":YۛTlJO&Xܓv/zG*7 4FFP*NL n1$0+-"ܚD<(URAZtjerͪڎ7:ZItNtNKV|s; V"Z) gJNe<BJRDTu>gdc@tIUgǻUJ_ZDDZ3N ad53'|hOd2=9n :U֫5=N *u%GTqKTLKբkc(Tgc:YQ(W[&XoIlqSYJ&z+dK|RVz- ̈K`D<Ҩ_#Ҁ,J/=ӆkd"] Xܓt(-| 8y&;e0H qNLqJCTU'%+:2: RRAdjKv"V w&T!tj4OX>e5<Rɀ8V;T˚rp.ӡDp c2Lψ?xn4^_Zpʉu0@4B\^spP=F6pt (-PˈfHbT0H6jQ3/v뇀0J*#3k~к4BG=A2XhT`+͞Llt\De|MO |Tsϥ4 40}+@ꕱC~B-GZ.C4@() dQ4h GVq_ϭt]4G3($FYtj J^]W4^*BϔXtRt0*Vr_ 5X"{lMН571 3"bap{i&wg?]YIf ֚y @-+y^513v|l[B \\ ~@$*ý;rfy3wCt=Ih3-]ЙNSNE!k-Y|`K3<8U@rmS. =g5KznmAP֌.z#’a03!tCH4iП,tC,q>)ysswӌK 5ġ{hKw ()]S Z-:ߧ$;*qW3x6#9É/5(YV:xc̵]hVלoy:1ޣIj9.uā{ϗx7'? fu[L1 Weo? 
g;ĆIvo(wei i玜Xy~M v0 6r~GD[FT9 {?n4Vf+ `~Z;(ۄ$.nFlj9'#`un Bc=q(Śn@:^2 õ,Ğ4guTV.?:Vd~'_SZ3Prn=9WuqJnoۧl)fk|G%ȇf`v1'eJ~QR 16k?4ѳ֮c4OˡdL.0C({+}@316 د`5 C9<7[s("dS@:fi=˞3/gtبkk*Z#t^k|V4 zT4/2.+[-ǧW(eB@No)+xPKtVV I !B57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/action/action.yamlYmo~B6zW}(P\ASpEP(KH+?3v5Iw%q3ϼ̋f3 ̂jsز8݆w!bE>I:Oʭֈo67YDIF0|lgxuXoR(|CzI7Bzfm Q(zyR[!~W~6efx-jՠ?|`|qn9:94}*ޚqtVM3;t`;)Wghh'?O](,\GUIPqw =|,I  [gA/꣱B럫oƒ;g 58V7,㋻t6w>R}vrCpq'tLի& ֚am z^Տ?Q\mT䃐!$ίsAue1F4/f%"~H_Ul_U_ Mة30*]BsGCxonN` ^ LWU ^'V2 ᨚO% w<ضxE,z/%k ^Õk'՟Zv# xOBJ4u? H4&@5 ]@o4KE_@V@vTFC#QXx5Z`#D7xrQrH9?R5Ú/O9uSL:){5;_s}v?Q z/\э ?扠{}.TGȕ:nۈ r륜dy]t"ru^xZKv0bUBiI[֐lRJHp: E|愢&9MQbH>%bhF8- !fn3L Lvb'm3 ɴZ/@1ã"עҟ7&hXn *pŃVθ4wR' qЅhfDC3 PS_ ڋCsZUS {\b'NjuICPF^#’a0\3!tCX4iНj,tCM9nJF`ͩ%`@kF[[Ht&FytOPMSDFzE3—U2=;hP\Vn,w hmbsboX: ^K5f2<&PzyrEs[&66PvVlxF`XRd70FR6nT xNx/3m^3` MN{Yq_jt%o[xmЬ49oy21IJ%&wā{ ߧx;f? fqy5!WkohoհW_GY f$;߾ƽ^Q՛z|#e_iB(cd[P.fźPnht8shz(}w {]-R`lSV6a耦[[ZI̺mPq9!h0 @ h>\uKaTSWcYU #։Gʭ1H[EOJ>Oq 8ȭmP6-FhQ X_昒b3%+R)BM5 Ot,{۱c5ͱes(Z, J^kz NmH@ʼwA3 \7E/ϰ`5 Cú9<7[r("dS:jZx=ݤ3O<-pO6*nt܂tMO\_}.sSo}`̀C2Mx̷k(nVa,gHsȑ2G PKtVV*O657cbec31-a778-4f2e-be6b-7b4dd3ee98d7/data/integers.txt s S022PKtVV}Wc257cbec31-a778-4f2e-be6b-7b4dd3ee98d7/metadata.yamlPKuVVѺ: 257cbec31-a778-4f2e-be6b-7b4dd3ee98d7/checksums.md5PKtVVv 53,157cbec31-a778-4f2e-be6b-7b4dd3ee98d7/VERSIONPKtVV}Wc=57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/metadata.yamlPKtVVoqW =b57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/citations.bibPKtVVv 53757cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/VERSIONPK#SV9δWcl57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/metadata.yamlPK#SV* +l57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/citations.bibPK#SVv 53f357cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/VERSIONPK#SVꌷ 4 q57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/action/action.yamlPKtVV I !B%57cbec31-a778-4f2e-be6b-7b4dd3ee98d7/provenance/action/action.yamlPKtVV*O6057cbec31-a778-4f2e-be6b-7b4dd3ee98d7/data/integers.txtPK -1q2cli-2024.5.0/q2cli/tests/data/parse_dir_test/left_ints.qza000066400000000000000000000333211462552630000235630ustar00rootroot00000000000000PKtVVdiWc2cf5e3665-becd-4500-8fe1-46641ada29dc/metadata.yaml+-LRHN3M5633MJMN5150еHK551331LLI4LI*,HR+ N-,MKN5J/M,A 3r,JM./trPKtVV-o; 2cf5e3665-becd-4500-8fe1-46641ada29dc/checksums.md5j[A ~%4?E6 $є SKh>$8@ J+5{ Ξ{!J ʘo7w1p9Y]Y)>ذY3b&*VT4gaq<_KX!p-ʨPERV/ ]igO=UuԹB%i@Ϛ/z\ҧ*\no[#ڢ67t<6Kī- #{Ѹ E{D )E] HF[YOuF%bbk?PKtVVv 53,cf5e3665-becd-4500-8fe1-46641ada29dc/VERSION uU0J,J,KR0J+JM-/ʶR0202336KO205611PKtVVdiWc=cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/metadata.yaml+-LRHN3M5633MJMN5150еHK551331LLI4LI*,HR+ N-,MKN5J/M,A 3r,JM./trPKtVVoqW =cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/citations.bibYr7+Y )ԋTD/Eڊ؉7S`7ȆZ2cf쳘~lВJ\5g6IJ \\{Wz)R^R F/b}u1^XWQ+;kuήd.L;Kv KXge8[d+*پ8\ge&e}"Iglyi+{J(❶&@R$N[`qhԍQvAvȕ@w|}-^kQ öjlWF [- ǥT+vq6w;>7eBn]ZἐcW3G^X:I K:o#<"N8yrX.NKxf" ;3y0ZBk9Op'=y 'r636OןGiIjWfotBXw  am$Ɯ%} 6xB6F>ʕX5p89 ǩ۲vd tا6gduGצ.J\=a1cw2cJdrW|5N~v56s?O1Kwue4m#{ن^$XZ׌LtiQxvC*Lj`\vB+99mCԈ_H} /9</JkNO<(~4cS/‡W6C B]&7!㾫uQ7 r^|u"=" nMB1{()8Tz1FzJ)W[$$%QVd\=(&}'USP\%;c:/Sy €;bat]r{owFAaN3!;'sE# Qft.4bH_ztۻ$dWBWSgrFiWjgQa9R#>t9JY3,G0aڋLQlrHm{[@B{=U` Eo Ϟ}td,WJ zR\Vؖ[uHDi&bEtBZ`ruw{8hA ] FW 4+XG/=GjW`p{f'U-S9s65]zZr&2lu[]a{,Qc6(?!s+`۷X1`(~Yko4^]~-^\5H-%nG 6ml:i!KjSS|ogOj32bTp{gFj~A0-0'|/}SbVSvI}x [sv_N[M T8bfW7 H\fr׃/f\آ$?Nf 
fgk5;?W|8,nNmKlp(8 Rs0W%Noӭ{Z63! Wt3M`t息 ](lUNxsr|N3Q;yK65ڨɣ)8Md#/cyNVd-{-/,=7WRt_+9jE2T'c7GrVKx'd ^ !~8]ӱ2- *U ܉Ne= o ZLj@Y[]ΏJ,dpŵsS/i+nڑ)ѡ^t(\٨y>BEc(S9p^x-wQ`TJ`EVޙ(rD:nL]`v|:%Mn⛬0Rt9x\$}+6TpL'52fFƷghN;%׈-9Or NٻF3Dt Q# |k>H= fpDy rT}@ >,k у|rcE;_u32 (s+ƮBՕI%W _Au{ި;1:: G$h`dH{q㡶6,ں. 8׊/ {$dBUSgrFaWjg^a1R#>!u9JY3,3aʋLQrrIm{%7ϼ2.fàEo O/^|TReƗWbU~x #(.,ZXvGh *0P :3ˣ7"?%3$ؔ@.fΐ`pІyp9/#(登;ݖ1'sD06a/^ [6gun{4鏿,G[,O |XUkТ$Y[`pyԡ'G=_IcCJq@mTMUb.shbAtXb-Nhv//kvd+ؙބkKjr.TSr:V&$ mSU8xS?/'چ9Q9K4ڸNDՃ1@[Cd +byNd˦זk4n=]qQ.o>mnd<7G}s@@??lty|Nc3eZ,4T(,[uI{Q!ZeM\'vd- itw 7f8gYxi⭜^yb :SrFft c% =RG_ddO!v\J6nc>YI-e(yb .Qd:#XFl4t𙙢eb:ʊD/R#a'Hم\,Ğ#R&fԤ-/VPK#SVv 53fcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/VERSION uU0J,J,KR0J+JM-/ʶR0202336KO205611PK#SVꌷ 4 qcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/action/action.yamlY[ܶ~_EEԲJK&C|o]{ў}o *F_Ug:* ᩆ5?AyE?DH6E}kt &mf-Q<׊gjSQf|VMlz/4":YۛTlJO&Xܓv/zG*7 4FFP*NL n1$0+-"ܚD<(URAZtjerͪڎ7:ZItNtNKV|s; V"Z) gJNe<BJRDTu>gdc@tIUgǻUJ_ZDDZ3N ad53'|hOd2=9n :U֫5=N *u%GTqKTLKբkc(Tgc:YQ(W[&XoIlqSYJ&z+dK|RVz- ̈K`D<Ҩ_#Ҁ,J/=ӆkd"] Xܓt(-| 8y&;e0H qNLqJCTU'%+:2: RRAdjKv"V w&T!tj4OX>e5<Rɀ8V;T˚rp.ӡDp c2Lψ?xn4^_Zpʉu0@4B\^spP=F6pt (-PˈfHbT0H6jQ3/v뇀0J*#3k~к4BG=A2XhT`+͞Llt\De|MO |Tsϥ4 40}+@ꕱC~B-GZ.C4@() dQ4h GVq_ϭt]4G3($FYtj J^]W4^*BϔXtRt0*Vr_ 5X"{lMН571 3"bap{i&wg?]YIf ֚y @-+y^513v|l[B \\ ~@$*ý;rfy3wCt=Ih3-]ЙNSNE!k-Y|`K3<8U@rmS. =g5KznmAP֌.z#’a03!tCH4iП,tC,q>)ysswӌK 5ġ{hKw ()]S Z-:ߧ$;*qW3x6#9É/5(YV:xc̵]hVלoy:1ޣIj9.uā{ϗx7'? fu[L1 Weo? g;ĆIvo(wei i玜Xy~M v0 6r~GD[FT9 {?n4Vf+ `~Z;(ۄ$.nFlj9'#`un Bc=q(Śn@:^2 õ,Ğ4guTV.?:Vd~'_SZ3Prn=9WuqJnoۧl)fk|G%ȇf`v1'eJ~QR 16k?4ѳ֮c4OˡdL.0C({+}@316 د`5 C9<7[s("dS@:fi=˞3/gtبkk*Z#t^k|V4 zT4/2.+[-ǧW(eB@No)+xPKtVVjX !Bcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/action/action.yamlYmo~B6zW}(P\ASpEP(KH+?3v5Iw%q3ϼ̋f3 ̂jsز8݆w!bE>I:Oʭֈo67YDIF0|lgxuXoR(|Cz9DBzfm hFҼPm+ ?23v< ^j>bc7Ui jyQg oM8:+Cӝf:0НhJ`ڟᧀU.xnw{$ cv(Yz>?G-г@Xj!uU7@3ڛE+9m~LZ]A_y!xUkg0L~=/(f6d* AHB??9PΠ:P2r#^Dw gu_3TUb*6b*vb&A-X9!<_k'0BA */PˋycmpT́WlGAlwA͢Mn|=뗏uJ ;ēOsxkڬoêΣM^<Gc:}~U_܇-mrwyj5pٵ0!_n0E8C)WSsB9H+&6ޯ*eY7U#+#68Y̝JƬ^}` Jf${dϘPfi> xOBJ4u? H4&@5 ]@o4KE_@V@vTFC#QXx5Z`#D7xrQrH9?R5Ú/O9uSL:){5;_s}v?Q z/\э ?扠{}.TGȕ:nۈ r륜dy]t"ru^xZKv0bUBiI[֐lRJHp: E|愢&9MQbH>%bhF8- !fn3L Lvb'm3 ɴZ/@1ã"עҟ7&hXn *pŃVθ4wR' qЅhfDC3 PS_ ڋCsZUS {\`'NjuICPF^#’a0\3!tCX4iНj,tCM9nJF`ͩ%`@kF[[Ht&FytOPMSDFzE3—U2=;hP\Vn,w hmbsboX: ^K5f2<&PzyrEs[&66PvVlxF`XRd70FR6nT xNx/3m^3` MN{Yq_jt%o[xmЬ49oy21IJ%&wā{ ߧx;f? 
fqy5!WkohoհWGY f$;߾ƽ^Q՛z|#e_iB(cd[P.fźPnht8shz(}w {]-R`lSV6a耦[[ZI̺mPq9!h0 @ h>\uKaTSWcYU #։Gʭ1H[EOJ>Oq 8ȭmP6-FhQ X_昒b3%+R)BM5 Ot,{۱c5ͱes(Z, J^kz NmH@ʼwA3 \7E/ϰ`5 Cú9<7[r("dS:jZx=ݤ3O<-pO6*nt܂tMO\_}.sSo}`̀C2Mx̷k(nVa,gHsȑ2G PKtVVd 6cf5e3665-becd-4500-8fe1-46641ada29dc/data/integers.txt s S02PKtVVdiWc2cf5e3665-becd-4500-8fe1-46641ada29dc/metadata.yamlPKtVV-o; 2cf5e3665-becd-4500-8fe1-46641ada29dc/checksums.md5PKtVVv 53,2cf5e3665-becd-4500-8fe1-46641ada29dc/VERSIONPKtVVdiWc=cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/metadata.yamlPKtVVoqW =ccf5e3665-becd-4500-8fe1-46641ada29dc/provenance/citations.bibPKtVVv 537cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/VERSIONPK#SV9δWclcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/metadata.yamlPK#SV* +lcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/citations.bibPK#SVv 53f4cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/VERSIONPK#SVꌷ 4 qcf5e3665-becd-4500-8fe1-46641ada29dc/provenance/artifacts/af2710e0-678a-40fc-9922-1a9cb1c18778/action/action.yamlPKtVVjX !B%cf5e3665-becd-4500-8fe1-46641ada29dc/provenance/action/action.yamlPKtVVd 60cf5e3665-becd-4500-8fe1-46641ada29dc/data/integers.txtPK +1q2cli-2024.5.0/q2cli/tests/test_cache_cli.py000066400000000000000000001116761462552630000204540ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import ast import shutil import os.path import unittest import unittest.mock import tempfile import pkg_resources from click.testing import CliRunner from qiime2.core.testing.type import (IntSequence1, IntSequence2, Mapping, SingleInt) from qiime2.core.testing.util import get_dummy_plugin from qiime2.core.util import load_action_yaml from qiime2.core.cache import Cache from q2cli.commands import RootCommand from q2cli.builtin.tools import tools from q2cli.util import get_default_recycle_pool from qiime2.sdk import Artifact, Visualization, ResultCollection from qiime2.sdk.parallel_config import PARALLEL_CONFIG # What to split the errors raised by intentionally failed pipeline on to get # at the uuids needed for testing FIRST_SPLIT = 'Plugin error from dummy-plugin:\n\n ' SECOND_SPLIT = '\n\nSee above for debug info.' 
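# A minimal sketch of the pattern the tests below share: they recover the
# UUIDs reported by an intentionally failed pipeline by slicing the captured
# CLI output between FIRST_SPLIT and SECOND_SPLIT and literal-eval'ing what
# remains. The helper name is hypothetical and is not referenced by the tests;
# it only illustrates that shared pattern.
def _extract_failed_pipeline_uuids(output):
    # Keep only the text between the plugin-error banner and the debug footer,
    # which the dummy pipeline fills with a repr of the UUIDs it produced.
    exception = output.split(FIRST_SPLIT)[-1]
    exception = exception.split(SECOND_SPLIT)[0]
    # The repr is a plain Python literal, so it can be parsed without eval().
    return ast.literal_eval(exception)
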
def get_data_path(filename): return pkg_resources.resource_filename('q2cli.tests', 'data/%s' % filename) class TestCacheCli(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.plugin_command = RootCommand().get_command( ctx=None, name='dummy-plugin') self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.cache = Cache(os.path.join(self.tempdir, 'new_cache')) self.art1 = Artifact.import_data(IntSequence1, [0, 1, 2]) self.art2 = Artifact.import_data(IntSequence1, [3, 4, 5]) self.art3 = Artifact.import_data(IntSequence2, [6, 7, 8]) self.art4 = Artifact.import_data(SingleInt, 0) self.art5 = Artifact.import_data(SingleInt, 1) self.ints1 = {'1': self.art4, '2': self.art5} self.ints2 = {'1': self.art1, '2': self.art2} self.mapping = Artifact.import_data(Mapping, {'a': '1', 'b': '2'}) self.metadata = os.path.join(self.tempdir, 'metadata.tsv') with open(self.metadata, 'w') as fh: fh.write('#SampleID\tcol1\n0\tfoo\nid1\tbar\n') self.non_cache_output = os.path.join(self.tempdir, 'output.qza') self.art3_non_cache = os.path.join(self.tempdir, 'art3.qza') # Ensure default state prior to test PARALLEL_CONFIG.parallel_config = None PARALLEL_CONFIG.action_executor_mapping = {} def tearDown(self): shutil.rmtree(self.tempdir) def _run_command(self, *args): return self.runner.invoke(self.plugin_command, args) def test_inputs_from_cache(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = str(self.cache.path) + ':art3' result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', self.non_cache_output, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(self.non_cache_output).view(list), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) def test_inputs_split(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.art3.save(self.art3_non_cache) art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', self.art3_non_cache, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', self.non_cache_output, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(self.non_cache_output).view(list), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) def test_colon_in_input_path_not_cache(self): art_path = os.path.join(self.tempdir, 'art:1.qza') self.art1.save(art_path) left_path = os.path.join(self.tempdir, 'left.qza') right_path = os.path.join(self.tempdir, 'right.qza') result = self._run_command( 'split-ints', '--i-ints', art_path, '--o-left', left_path, '--o-right', right_path, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(left_path).view(list), [0]) self.assertEqual(Artifact.load(right_path).view(list), [1, 2]) def test_colon_in_cache_path(self): cache = Cache(os.path.join(self.tempdir, 'new:cache')) cache.save(self.art1, 'art') art_path = str(cache.path) + ':art' left_path = os.path.join(self.tempdir, 'left.qza') right_path = os.path.join(self.tempdir, 'right.qza') result = self._run_command( 'split-ints', '--i-ints', art_path, '--o-left', left_path, '--o-right', right_path, '--verbose' ) self.assertEqual(result.exit_code, 0) 
self.assertEqual(Artifact.load(left_path).view(list), [0]) self.assertEqual(Artifact.load(right_path).view(list), [1, 2]) def test_output_to_cache(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = str(self.cache.path) + ':art3' out_path = str(self.cache.path) + ':out' result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', out_path, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(self.cache.load('out').view(list), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) def test_outputs_to_cache(self): self.cache.save(self.art1, 'art1') art1_path = str(self.cache.path) + ':art1' left_path = str(self.cache.path) + ':left' right_path = str(self.cache.path) + ':right' result = self._run_command( 'split-ints', '--i-ints', art1_path, '--o-left', left_path, '--o-right', right_path, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(self.cache.load('left').view(list), [0]) self.assertEqual(self.cache.load('right').view(list), [1, 2]) def test_outputs_split(self): self.cache.save(self.art1, 'art1') art1_path = str(self.cache.path) + ':art1' left_path = str(self.cache.path) + ':left' result = self._run_command( 'split-ints', '--i-ints', art1_path, '--o-left', left_path, '--o-right', self.non_cache_output, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(self.cache.load('left').view(list), [0]) self.assertEqual(Artifact.load(self.non_cache_output).view(list), [1, 2]) def test_invalid_cache_path_input(self): art1_path = 'not_a_cache:art1' left_path = str(self.cache.path) + ':left' right_path = str(self.cache.path) + ':right' result = self._run_command( 'split-ints', '--i-ints', art1_path, '--o-left', left_path, '--o-right', right_path, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertRegex(result.output, r"cache") def test_invalid_cache_path_output(self): self.cache.save(self.art1, 'art1') art1_path = str(self.cache.path) + ':art1' left_path = '/this/is/not_a_cache:left' right_path = str(self.cache.path) + ':right' result = self._run_command( 'split-ints', '--i-ints', art1_path, '--o-left', left_path, '--o-right', right_path, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('does not exist', result.output) def test_colon_in_out_path_not_cache(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = str(self.cache.path) + ':art3' out_path = os.path.join(self.tempdir, 'out:put.qza') result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', out_path, '--verbose' ) self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(out_path).view(list), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) def test_collection_roundtrip_list(self): key1 = 'out1' key2 = 'out2' collection_out1 = str(self.cache.path) + ':' + key1 collection_out2 = str(self.cache.path) + ':' + key2 result = self._run_command( 'list-params', '--p-ints', '0', '--p-ints', '1', '--o-output', collection_out1, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key1) 
self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) result = self._run_command( 'list-of-ints', '--i-ints', collection_out1, '--o-output', collection_out2, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key2) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_collection_roundtrip_dict_keyed(self): key1 = 'out1' key2 = 'out2' collection_out1 = str(self.cache.path) + ':' + key1 collection_out2 = str(self.cache.path) + ':' + key2 result = self._run_command( 'dict-params', '--p-ints', 'foo:0', '--p-ints', 'bar:1', '--o-output', collection_out1, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key1) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) result = self._run_command( 'dict-of-ints', '--i-ints', collection_out1, '--o-output', collection_out2, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key2) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_collection_roundtrip_dict_unkeyed(self): key1 = 'out1' key2 = 'out2' collection_out1 = str(self.cache.path) + ':' + key1 collection_out2 = str(self.cache.path) + ':' + key2 result = self._run_command( 'dict-params', '--p-ints', '0', '--p-ints', '1', '--o-output', collection_out1, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key1) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) result = self._run_command( 'dict-of-ints', '--i-ints', collection_out1, '--o-output', collection_out2, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection(key2) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_de_facto_list(self): self.cache.save(self.art4, 'art4') self.cache.save(self.art5, 'art5') art4_path = str(self.cache.path) + ':art4' art5_path = str(self.cache.path) + ':art5' output = str(self.cache.path) + ':output' result = self._run_command( 'list-of-ints', '--i-ints', art4_path, '--i-ints', art5_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection('output') self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_de_facto_dict_keyed(self): self.cache.save(self.art4, 'art4') self.cache.save(self.art5, 'art5') art4_path = str(self.cache.path) + ':art4' art5_path = str(self.cache.path) + ':art5' output = str(self.cache.path) + ':output' result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{art4_path}', '--i-ints', f'bar:{art5_path}', '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection('output') self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_de_facto_dict_unkeyed(self): 
self.cache.save(self.art4, 'art4') self.cache.save(self.art5, 'art5') art4_path = str(self.cache.path) + ':art4' art5_path = str(self.cache.path) + ':art5' output = str(self.cache.path) + ':output' result = self._run_command( 'dict-of-ints', '--i-ints', art4_path, '--i-ints', art5_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection('output') self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_mixed_cached_uncached_inputs(self): art4_path = os.path.join(self.tempdir, 'art4.qza') self.art4.save(art4_path) self.cache.save(self.art5, 'art5') art5_path = str(self.cache.path) + ':art5' output = str(self.cache.path) + ':output' result = self._run_command( 'dict-of-ints', '--i-ints', art4_path, '--i-ints', art5_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection('output') self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) self.cache.remove('output') result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{art4_path}', '--i-ints', f'bar:{art5_path}', '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = self.cache.load_collection('output') self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_pipeline_resumption_default(self): plugin_action = 'dummy_plugin_resumable_varied_pipeline' default_pool = get_default_recycle_pool(plugin_action) default_pool_fp = os.path.join(self.cache.pools, default_pool) output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') self.cache.save_collection(self.ints2, 'ints2') self.cache.save(self.art4, 'int1') ints1_path = str(self.cache.path) + ':ints1' ints2_path = str(self.cache.path) + ':ints2' int1_path = str(self.cache.path) + ':int1' result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--p-fail', 'True', '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose' ) self.assertEqual(result.exit_code, 1) # Assert that the pool exists self.assertTrue(os.path.exists(default_pool_fp)) exception = result.output.split(FIRST_SPLIT)[-1] exception = exception.split(SECOND_SPLIT)[0] ints1_uuids, ints2_uuids, int1_uuid, list_uuids, dict_uuids, \ identity_uuid, viz_uuid = ast.literal_eval(exception) result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose' ) self.assertEqual(result.exit_code, 0) ints1_ret_fp = os.path.join(output, 'ints1_ret') ints2_ret_fp = os.path.join(output, 'ints2_ret') int1_ret_fp = os.path.join(output, 'int1_ret.qza') list_ret_fp = os.path.join(output, 'list_ret') dict_ret_fp = os.path.join(output, 'dict_ret') identity_ret_fp = os.path.join(output, 'identity_ret.qza') viz_ret_fp = os.path.join(output, 'viz.qzv') ints1_ret = ResultCollection.load(ints1_ret_fp) ints2_ret = ResultCollection.load(ints2_ret_fp) int1_ret = Artifact.load(int1_ret_fp) list_ret = ResultCollection.load(list_ret_fp) dict_ret = 
ResultCollection.load(dict_ret_fp) identity_ret = Artifact.load(identity_ret_fp) viz_ret = Visualization.load(viz_ret_fp) complete_ints1_uuids = self._load_alias_uuids(ints1_ret) complete_ints2_uuids = self._load_alias_uuids(ints2_ret) complete_int1_uuid = self._load_alias_uuid(int1_ret) complete_list_uuids = self._load_alias_uuids(list_ret) complete_dict_uuids = self._load_alias_uuids(dict_ret) complete_identity_uuid = self._load_alias_uuid(identity_ret) complete_viz_uuid = self._load_alias_uuid(viz_ret) # Assert that the artifacts returned by the completed run of the # pipeline are aliases of the artifacts created by the first failed run self.assertEqual(ints1_uuids, complete_ints1_uuids) self.assertEqual(ints2_uuids, complete_ints2_uuids) self.assertEqual(int1_uuid, complete_int1_uuid) self.assertEqual(list_uuids, complete_list_uuids) self.assertEqual(dict_uuids, complete_dict_uuids) self.assertEqual(identity_uuid, complete_identity_uuid) self.assertEqual(viz_uuid, complete_viz_uuid) # Assert that the pool was removed self.assertFalse(os.path.exists(default_pool_fp)) def test_pipeline_resumption_different_pool(self): pool = 'pool' pool_fp = os.path.join(self.cache.pools, pool) output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') self.cache.save_collection(self.ints2, 'ints2') self.cache.save(self.art4, 'int1') ints1_path = str(self.cache.path) + ':ints1' ints2_path = str(self.cache.path) + ':ints2' int1_path = str(self.cache.path) + ':int1' result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--p-fail', 'True', '--output-dir', output, '--recycle-pool', pool, '--use-cache', str(self.cache.path), '--verbose' ) self.assertEqual(result.exit_code, 1) # Assert that the pool exists self.assertTrue(os.path.exists(pool_fp)) exception = result.output.split(FIRST_SPLIT)[-1] exception = exception.split(SECOND_SPLIT)[0] ints1_uuids, ints2_uuids, int1_uuid, list_uuids, dict_uuids, \ identity_uuid, viz_uuid = ast.literal_eval(exception) result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--output-dir', output, '--recycle-pool', pool, '--use-cache', str(self.cache.path), '--verbose' ) self.assertEqual(result.exit_code, 0) ints1_ret_fp = os.path.join(output, 'ints1_ret') ints2_ret_fp = os.path.join(output, 'ints2_ret') int1_ret_fp = os.path.join(output, 'int1_ret.qza') list_ret_fp = os.path.join(output, 'list_ret') dict_ret_fp = os.path.join(output, 'dict_ret') identity_ret_fp = os.path.join(output, 'identity_ret.qza') viz_ret_fp = os.path.join(output, 'viz.qzv') ints1_ret = ResultCollection.load(ints1_ret_fp) ints2_ret = ResultCollection.load(ints2_ret_fp) int1_ret = Artifact.load(int1_ret_fp) list_ret = ResultCollection.load(list_ret_fp) dict_ret = ResultCollection.load(dict_ret_fp) identity_ret = Artifact.load(identity_ret_fp) viz_ret = Visualization.load(viz_ret_fp) complete_ints1_uuids = self._load_alias_uuids(ints1_ret) complete_ints2_uuids = self._load_alias_uuids(ints2_ret) complete_int1_uuid = self._load_alias_uuid(int1_ret) complete_list_uuids = self._load_alias_uuids(list_ret) complete_dict_uuids = self._load_alias_uuids(dict_ret) complete_identity_uuid = self._load_alias_uuid(identity_ret) complete_viz_uuid = self._load_alias_uuid(viz_ret) # Assert that the artifacts returned by the 
completed run of the # pipeline are aliases of the artifacts created by the first failed run self.assertEqual(ints1_uuids, complete_ints1_uuids) self.assertEqual(ints2_uuids, complete_ints2_uuids) self.assertEqual(int1_uuid, complete_int1_uuid) self.assertEqual(list_uuids, complete_list_uuids) self.assertEqual(dict_uuids, complete_dict_uuids) self.assertEqual(identity_uuid, complete_identity_uuid) self.assertEqual(viz_uuid, complete_viz_uuid) # Assert that the pool is still there self.assertTrue(os.path.exists(pool_fp)) def test_pipeline_resumption_no_recycle(self): plugin_action = 'dummy_plugin_resumable_varied_pipeline' default_pool = get_default_recycle_pool(plugin_action) default_pool_fp = os.path.join(self.cache.pools, default_pool) output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') self.cache.save_collection(self.ints2, 'ints2') self.cache.save(self.art4, 'int1') ints1_path = str(self.cache.path) + ':ints1' ints2_path = str(self.cache.path) + ':ints2' int1_path = str(self.cache.path) + ':int1' result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--p-fail', 'True', '--output-dir', output, '--use-cache', str(self.cache.path), '--no-recycle', '--verbose' ) self.assertEqual(result.exit_code, 1) # Assert that the pool was not created self.assertFalse(os.path.exists(default_pool_fp)) exception = result.output.split(FIRST_SPLIT)[-1] exception = exception.split(SECOND_SPLIT)[0] ints1_uuids, ints2_uuids, int1_uuid, list_uuids, dict_uuids, \ identity_uuid, viz_uuid = ast.literal_eval(exception) result = self._run_command( 'resumable-varied-pipeline', '--i-ints1', ints1_path, '--i-ints2', ints2_path, '--i-int1', int1_path, '--p-string', 'Hi', '--m-metadata-file', self.metadata, '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose' ) self.assertEqual(result.exit_code, 0) ints1_ret_fp = os.path.join(output, 'ints1_ret') ints2_ret_fp = os.path.join(output, 'ints2_ret') int1_ret_fp = os.path.join(output, 'int1_ret.qza') list_ret_fp = os.path.join(output, 'list_ret') dict_ret_fp = os.path.join(output, 'dict_ret') identity_ret_fp = os.path.join(output, 'identity_ret.qza') viz_ret_fp = os.path.join(output, 'viz.qzv') ints1_ret = ResultCollection.load(ints1_ret_fp) ints2_ret = ResultCollection.load(ints2_ret_fp) int1_ret = Artifact.load(int1_ret_fp) list_ret = ResultCollection.load(list_ret_fp) dict_ret = ResultCollection.load(dict_ret_fp) identity_ret = Artifact.load(identity_ret_fp) viz_ret = Visualization.load(viz_ret_fp) complete_ints1_uuids = self._load_alias_uuids(ints1_ret) complete_ints2_uuids = self._load_alias_uuids(ints2_ret) complete_int1_uuid = self._load_alias_uuid(int1_ret) complete_list_uuids = self._load_alias_uuids(list_ret) complete_dict_uuids = self._load_alias_uuids(dict_ret) complete_identity_uuid = self._load_alias_uuid(identity_ret) complete_viz_uuid = self._load_alias_uuid(viz_ret) # Assert that the artifacts returned by the completed run of the # pipeline are not aliases of the artifacts created by the first # failed run, since recycling was disabled self.assertNotEqual(ints1_uuids, complete_ints1_uuids) self.assertNotEqual(ints2_uuids, complete_ints2_uuids) self.assertNotEqual(int1_uuid, complete_int1_uuid) self.assertNotEqual(list_uuids, complete_list_uuids) self.assertNotEqual(dict_uuids, complete_dict_uuids) self.assertNotEqual(identity_uuid, complete_identity_uuid) self.assertNotEqual(viz_uuid, complete_viz_uuid) #
Assert that the pool was removed self.assertFalse(os.path.exists(default_pool_fp)) def test_mixed_keyed_unkeyed_inputs(self): art4_uncached_path = os.path.join(self.tempdir, 'art4.qza') self.art4.save(art4_uncached_path) self.cache.save(self.art4, 'art4') self.cache.save(self.art5, 'art5') art4_path = str(self.cache.path) + ':art4' art5_path = str(self.cache.path) + ':art5' output = str(self.cache.path) + ':output' result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{art4_path}', '--i-ints', art5_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('Keyed values cannot be mixed with unkeyed values.', str(result.exception)) result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{art4_uncached_path}', '--i-ints', art5_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('Keyed values cannot be mixed with unkeyed values.', str(result.exception)) result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{art4_path}', '--i-ints', art4_uncached_path, '--o-output', output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('Keyed values cannot be mixed with unkeyed values.', str(result.exception)) def test_nonexistent_input_key(self): art1_path = str(self.cache.path) + ':art1' left_path = str(self.cache.path) + ':left' result = self._run_command( 'split-ints', '--i-ints', art1_path, '--o-left', left_path, '--o-right', self.non_cache_output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn("does not contain the key 'art1'", str(result.output)) def test_output_key_invalid(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = str(self.cache.path) + ':art3' invalid = 'not_valid_identifier$&;' out_path = str(self.cache.path) + ':' + invalid result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', out_path, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn(f"Key '{invalid}' is not a valid Python identifier", str(result.exception)) def test_artifact_as_metadata_cache(self): self.cache.save(self.mapping, 'mapping') mapping_path = str(self.cache.path) + ':mapping' result = self.runner.invoke(tools, ['inspect-metadata', mapping_path]) self.assertEqual(result.exit_code, 0) self.assertIn('COLUMN NAME TYPE', result.output) self.assertIn("=========== ===========", result.output) self.assertIn("a categorical", result.output) self.assertIn("b categorical", result.output) self.assertIn("IDS: 1", result.output) self.assertIn("COLUMNS: 2", result.output) def test_artifact_as_metadata_cache_bad_key(self): mapping_path = str(self.cache.path) + ':mapping' result = self.runner.invoke(tools, ['inspect-metadata', mapping_path]) self.assertEqual(result.exit_code, 1) self.assertIn("does not contain the key 'mapping'", result.output) def test_artifact_as_metadata_cache_bad_cache(self): result = self.runner.invoke( tools, ['inspect-metadata', 'not_a_cache:key']) self.assertEqual(result.exit_code, 1) self.assertIn('is not a valid cache', result.output) def test_output_dir_as_cache(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = 
str(self.cache.path) + ':art3' out_path = str(self.cache.path) + ':out' result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--output-dir', out_path, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn( 'Cache keys cannot be used as output dirs.', str(result.exception)) def test_parallel(self): output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') ints1_path = str(self.cache.path) + ':ints1' result = self._run_command( 'resumable-pipeline', '--i-int-list', ints1_path, '--i-int-dict', ints1_path, '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose', '--parallel' ) self.assertEqual(result.exit_code, 0) list_return = ResultCollection.load( os.path.join(output, 'list_return')) dict_return = ResultCollection.load( os.path.join(output, 'dict_return')) list_execution_contexts = self._load_alias_execution_contexts( list_return) dict_execution_contexts = self._load_alias_execution_contexts( dict_return) expected = [{ 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor'}, { 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor' }] self.assertEqual(list_execution_contexts, expected) self.assertEqual(dict_execution_contexts, expected) def test_config_parallel(self): output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') ints1_path = str(self.cache.path) + ':ints1' config_path = get_data_path('mapping_config.toml') result = self._run_command( 'resumable-pipeline', '--i-int-list', ints1_path, '--i-int-dict', ints1_path, '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose', '--parallel-config', config_path ) self.assertEqual(result.exit_code, 0) list_return = ResultCollection.load( os.path.join(output, 'list_return')) dict_return = ResultCollection.load( os.path.join(output, 'dict_return')) list_execution_contexts = self._load_alias_execution_contexts( list_return) dict_execution_contexts = self._load_alias_execution_contexts( dict_return) list_expected = [{ 'type': 'parsl', 'parsl_type': '_TEST_EXECUTOR_'}, { 'type': 'parsl', 'parsl_type': '_TEST_EXECUTOR_' }] dict_expected = [{ 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor'}, { 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor' }] self.assertEqual(list_execution_contexts, list_expected) self.assertEqual(dict_execution_contexts, dict_expected) def test_both_parallel_flags(self): output = os.path.join(self.tempdir, 'output') self.cache.save_collection(self.ints1, 'ints1') ints1_path = str(self.cache.path) + ':ints1' config_path = get_data_path('mapping_config.toml') result = self._run_command( 'resumable-pipeline', '--i-int-list', ints1_path, '--i-int-dict', ints1_path, '--output-dir', output, '--use-cache', str(self.cache.path), '--verbose', '--parallel', '--parallel-config', config_path ) self.assertEqual(result.exit_code, 0) list_return = ResultCollection.load( os.path.join(output, 'list_return')) dict_return = ResultCollection.load( os.path.join(output, 'dict_return')) list_execution_contexts = self._load_alias_execution_contexts( list_return) dict_execution_contexts = self._load_alias_execution_contexts( dict_return) # The explicit config should override the default list_expected = [{ 'type': 'parsl', 'parsl_type': '_TEST_EXECUTOR_'}, { 'type': 'parsl', 'parsl_type': '_TEST_EXECUTOR_' }] dict_expected = [{ 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor'}, { 'type': 'parsl', 'parsl_type': 'ThreadPoolExecutor' }] 
self.assertEqual(list_execution_contexts, list_expected) self.assertEqual(dict_execution_contexts, dict_expected) def test_parallel_flags_on_non_pipeline(self): self.cache.save(self.art1, 'art1') self.cache.save(self.art2, 'art2') self.cache.save(self.art3, 'art3') art1_path = str(self.cache.path) + ':art1' art2_path = str(self.cache.path) + ':art2' art3_path = str(self.cache.path) + ':art3' output = str(self.cache.path) + ':output' result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', output, '--verbose', '--parallel' ) self.assertEqual(result.exit_code, 1) self.assertIn('No such option: --parallel', result.output) config_path = get_data_path('mapping_config.toml') result = self._run_command( 'concatenate-ints', '--i-ints1', art1_path, '--i-ints2', art2_path, '--i-ints3', art3_path, '--p-int1', '9', '--p-int2', '10', '--o-concatenated-ints', output, '--verbose', '--parallel-config', config_path ) self.assertEqual(result.exit_code, 1) self.assertIn('No such option: --parallel-config', result.output) def _load_alias_execution_contexts(self, collection): execution_contexts = [] for result in collection.values(): alias_uuid = load_action_yaml( result._archiver.path)['action']['alias-of'] execution_contexts.append(load_action_yaml( self.cache.data / alias_uuid) ['execution']['execution_context']) return execution_contexts def _load_alias_uuid(self, result): return load_action_yaml(result._archiver.path)['action']['alias-of'] def _load_alias_uuids(self, collection): uuids = [] for artifact in collection.values(): uuids.append(load_action_yaml( artifact._archiver.path)['action']['alias-of']) return uuids if __name__ == "__main__": unittest.main() q2cli-2024.5.0/q2cli/tests/test_cli.py000066400000000000000000001214521462552630000173220ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import os.path import unittest import contextlib import unittest.mock import tempfile import shutil import click import errno from click.testing import CliRunner from qiime2.core.cache import get_cache from qiime2.core.testing.type import IntSequence1, IntSequence2, SingleInt from qiime2.core.testing.util import get_dummy_plugin from qiime2.sdk import Artifact, Visualization, ResultCollection from q2cli.builtin.info import info from q2cli.builtin.tools import tools from q2cli.commands import RootCommand from q2cli.click.type import QIIME2Type class CliTests(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.artifact1_path = os.path.join(self.tempdir, 'a1.qza') self.mapping_path = os.path.join(self.tempdir, 'mapping.qza') artifact1 = Artifact.import_data(IntSequence1, [0, 42, 43]) artifact1.save(self.artifact1_path) self.artifact1_root_dir = str(artifact1.uuid) mapping = Artifact.import_data('Mapping', {'foo': '42'}) mapping.save(self.mapping_path) def tearDown(self): shutil.rmtree(self.tempdir) def test_info(self): result = self.runner.invoke(info) self.assertEqual(result.exit_code, 0) # May not always start with "System versions" if cache updating message # is printed. 
self.assertIn('System versions', result.output) self.assertIn('Installed plugins', result.output) self.assertIn('dummy-plugin', result.output) self.assertIn('other-plugin', result.output) def test_list_commands(self): # top level commands, including a plugin, are present qiime_cli = RootCommand() commands = qiime_cli.list_commands(ctx=None) self.assertIn('info', commands) self.assertIn('tools', commands) self.assertIn('dummy-plugin', commands) self.assertIn('other-plugin', commands) def test_plugin_list_commands(self): # plugin commands are present including a method and visualizer qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') commands = command.list_commands(ctx=None) self.assertIn('split-ints', commands) self.assertIn('mapping-viz', commands) self.assertNotIn('split_ints', commands) self.assertNotIn('mapping_viz', commands) self.assertNotIn('-underscore-method', commands) self.assertNotIn('_underscore-method', commands) def test_plugin_list_hidden_commands(self): # plugin commands are present including a method and visualizer and # hidden method qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') commands = self.runner.invoke(command, ['--show-hidden-actions']).output self.assertIn('split-ints', commands) self.assertIn('mapping-viz', commands) self.assertIn('_underscore-method', commands) self.assertNotIn('split_ints', commands) self.assertNotIn('mapping_viz', commands) self.assertNotIn('_underscore_method', commands) self.assertNotIn('-underscore-method', commands) def test_action_parameter_types(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') results = self.runner.invoke(command, ['typical-pipeline', '--help']) help_text = results.output # Check the help text to make sure the types are displayed correctly # boolean primitive self.assertIn('--p-do-extra-thing / --p-no-do-extra-thing', help_text) # int primitive self.assertIn('--p-add INTEGER', help_text) # Run it to make sure the types are converted correctly, the framework # will error if it receives the wrong type from the interface.
output_dir = os.path.join(self.tempdir, 'output-test') result = self.runner.invoke(command, [ 'typical-pipeline', '--i-int-sequence', self.artifact1_path, '--i-mapping', self.mapping_path, '--p-do-extra-thing', '--p-add', '10', '--output-dir', output_dir, '--verbose']) self.assertEqual(result.exit_code, 0) def test_execute_hidden_action(self): int_path = os.path.join(self.tempdir, 'int.qza') qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') result = self.runner.invoke( command, ['_underscore-method', '--o-int', int_path, '--verbose']) self.assertEqual(result.exit_code, 0) def test_extract(self): result = self.runner.invoke( tools, ['extract', '--input-path', self.artifact1_path, '--output-path', self.tempdir]) # command completes successfully and creates the correct # output directory self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists( os.path.join(self.tempdir, self.artifact1_root_dir))) # results are correct data_f = open(os.path.join(self.tempdir, self.artifact1_root_dir, 'data', 'ints.txt')) self.assertEqual(data_f.read(), "0\n42\n43\n") def test_validate_min(self): result = self.runner.invoke( tools, ['validate', self.artifact1_path, '--level', 'min']) self.assertEqual(result.exit_code, 0) self.assertIn('appears to be valid at level=min', result.output) def test_validate_max(self): result = self.runner.invoke( tools, ['validate', self.artifact1_path, '--level', 'max']) self.assertEqual(result.exit_code, 0) self.assertIn('appears to be valid at level=max', result.output) result = self.runner.invoke(tools, ['validate', self.artifact1_path]) self.assertEqual(result.exit_code, 0) self.assertIn('appears to be valid at level=max', result.output) def test_split_ints(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') # build output file names left_path = os.path.join(self.tempdir, 'left.qza') right_path = os.path.join(self.tempdir, 'right.qza') # TODO: currently must pass `--verbose` to commands invoked by Click's # test runner because redirecting stdout/stderr raises an # "io.UnsupportedOperation: fileno" error. Likely related to Click # mocking a filesystem in the test runner.
result = self.runner.invoke( command, ['split-ints', '--i-ints', self.artifact1_path, '--o-left', left_path, '--o-right', right_path, '--verbose']) # command completes successfully and creates the correct # output files self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(left_path)) self.assertTrue(os.path.exists(right_path)) # results are correct left = Artifact.load(left_path) right = Artifact.load(right_path) self.assertEqual(left.view(list), [0]) self.assertEqual(right.view(list), [42, 43]) def test_variadic_inputs(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') output_path = os.path.join(self.tempdir, 'output.qza') ints1 = Artifact.import_data('IntSequence1', [1, 2, 3]).save( os.path.join(self.tempdir, 'ints1.qza')) ints2 = Artifact.import_data('IntSequence2', [4, 5, 6]).save( os.path.join(self.tempdir, 'ints2.qza')) set1 = Artifact.import_data('SingleInt', 7).save( os.path.join(self.tempdir, 'set1.qza')) set2 = Artifact.import_data('SingleInt', 8).save( os.path.join(self.tempdir, 'set2.qza')) result = self.runner.invoke( command, ['variadic-input-method', '--i-ints', ints1, '--i-ints', ints2, '--i-int-set', set1, '--i-int-set', set2, '--p-nums', '9', '--p-nums', '10', '--p-opt-nums', '11', '--p-opt-nums', '12', '--p-opt-nums', '13', '--o-output', output_path, '--verbose']) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(output_path)) output = Artifact.load(output_path) self.assertEqual(output.view(list), list(range(1, 14))) def test_with_parameters_only(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') output_path = os.path.join(self.tempdir, 'output.qza') result = self.runner.invoke( command, ['params-only-method', '--p-name', 'Peanut', '--p-age', '42', '--o-out', output_path, '--verbose']) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(output_path)) artifact = Artifact.load(output_path) self.assertEqual(artifact.view(dict), {'Peanut': '42'}) def test_without_inputs_or_parameters(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') output_path = os.path.join(self.tempdir, 'output.qza') result = self.runner.invoke( command, ['no-input-method', '--o-out', output_path, '--verbose']) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(output_path)) artifact = Artifact.load(output_path) self.assertEqual(artifact.view(dict), {'foo': '42'}) def test_qza_extension(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') # build output parameter arguments and expected output file names left_path = os.path.join(self.tempdir, 'left') expected_left_path = os.path.join(self.tempdir, 'left.qza') right_path = os.path.join(self.tempdir, 'right') expected_right_path = os.path.join(self.tempdir, 'right.qza') result = self.runner.invoke( command, ['split-ints', '--i-ints', self.artifact1_path, '--o-left', left_path, '--o-right', right_path, '--verbose']) # command completes successfully and creates the correct # output files self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(expected_left_path)) self.assertTrue(os.path.exists(expected_right_path)) # results are correct left = Artifact.load(expected_left_path) right = Artifact.load(expected_right_path) self.assertEqual(left.view(list), [0]) self.assertEqual(right.view(list), [42, 43]) def test_qzv_extension(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, 
name='dummy-plugin') # build output parameter arguments and expected output file names viz_path = os.path.join(self.tempdir, 'viz') expected_viz_path = os.path.join(self.tempdir, 'viz.qzv') result = self.runner.invoke( command, ['most-common-viz', '--i-ints', self.artifact1_path, '--o-visualization', viz_path, '--verbose']) # command completes successfully and creates the correct # output file self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(expected_viz_path)) # Visualization contains expected contents viz = Visualization.load(expected_viz_path) self.assertEqual(viz.get_index_paths(), {'html': 'data/index.html', 'tsv': 'data/index.tsv'}) def test_verbose_shows_stacktrace(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') output = os.path.join(self.tempdir, 'never-happens.qza') result = self.runner.invoke( command, ['failing-pipeline', '--i-int-sequence', self.artifact1_path, '--o-mapping', output, '--p-break-from', 'internal', '--verbose']) self.assertEqual(result.exit_code, 1) self.assertIn('Traceback (most recent call last)', result.output) def test_input_conversion(self): obj = QIIME2Type(IntSequence1.to_ast(), repr(IntSequence1)) with self.assertRaisesRegex(click.exceptions.BadParameter, "x does not exist"): obj._convert_input('x', None, None) # This is to ensure the temp in the regex matches the temp used in the # method under test in type.py temp = str(get_cache().path) with unittest.mock.patch('qiime2.sdk.Result.peek', side_effect=OSError(errno.ENOSPC, 'No space left on ' 'device')): with self.assertRaisesRegex(click.exceptions.BadParameter, f'{temp!r}.*' f'{self.artifact1_path!r}.*' f'{temp!r}'): obj._convert_input(self.artifact1_path, None, None) def test_syntax_error_in_env(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') viz_path = os.path.join(self.tempdir, 'viz') with unittest.mock.patch('qiime2.sdk.Result.peek', side_effect=SyntaxError): result = self.runner.invoke( command, ['most-common-viz', '--i-ints', self.artifact1_path, '--o-visualization', viz_path, '--verbose']) self.assertEqual(result.exit_code, 1) self.assertIn('problem loading', result.output) self.assertIn(self.artifact1_path, result.output) def test_deprecated_help_text(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') result = self.runner.invoke(command, ['deprecated-method', '--help']) self.assertEqual(result.exit_code, 0) self.assertIn('WARNING', result.output) self.assertIn('deprecated', result.output) def test_run_deprecated_gets_warning_msg(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') output_path = os.path.join(self.tempdir, 'output.qza') result = self.runner.invoke( command, ['deprecated-method', '--o-out', output_path, '--verbose']) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(output_path)) artifact = Artifact.load(output_path) # Just make sure that the command ran as expected self.assertEqual(artifact.view(dict), {'foo': '43'}) self.assertIn('deprecated', result.output) class TestOptionalArtifactSupport(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.plugin_command = RootCommand().get_command( ctx=None, name='dummy-plugin') self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.ints1 = os.path.join(self.tempdir, 'ints1.qza') Artifact.import_data( IntSequence1, [0, 42, 43], list).save(self.ints1) self.ints2 = 
os.path.join(self.tempdir, 'ints2.qza') Artifact.import_data( IntSequence1, [99, -22], list).save(self.ints2) self.ints3 = os.path.join(self.tempdir, 'ints3.qza') Artifact.import_data( IntSequence2, [43, 43], list).save(self.ints3) self.output = os.path.join(self.tempdir, 'output.qza') def tearDown(self): shutil.rmtree(self.tempdir) def _run_command(self, *args): return self.runner.invoke(self.plugin_command, args) def test_no_optional_artifacts_provided(self): result = self._run_command( 'optional-artifacts-method', '--i-ints', self.ints1, '--p-num1', '42', '--o-output', self.output, '--verbose') self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(self.output).view(list), [0, 42, 43, 42]) def test_one_optional_artifact_provided(self): result = self._run_command( 'optional-artifacts-method', '--i-ints', self.ints1, '--p-num1', '42', '--i-optional1', self.ints2, '--o-output', self.output, '--verbose') self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(self.output).view(list), [0, 42, 43, 42, 99, -22]) def test_all_optional_artifacts_provided(self): result = self._run_command( 'optional-artifacts-method', '--i-ints', self.ints1, '--p-num1', '42', '--i-optional1', self.ints2, '--i-optional2', self.ints3, '--p-num2', '111', '--o-output', self.output, '--verbose') self.assertEqual(result.exit_code, 0) self.assertEqual(Artifact.load(self.output).view(list), [0, 42, 43, 42, 99, -22, 43, 43, 111]) def test_optional_artifact_type_mismatch(self): result = self._run_command( 'optional-artifacts-method', '--i-ints', self.ints1, '--p-num1', '42', '--i-optional1', self.ints3, '--o-output', self.output, '--verbose') self.assertEqual(result.exit_code, 1) self.assertRegex(str(result.output), 'type IntSequence1.*type IntSequence2.*') class MetadataTestsBase(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.plugin_command = RootCommand().get_command( ctx=None, name='dummy-plugin') self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.input_artifact = os.path.join(self.tempdir, 'in.qza') Artifact.import_data( IntSequence1, [0, 42, 43], list).save(self.input_artifact) self.output_artifact = os.path.join(self.tempdir, 'out.qza') self.metadata_file1 = os.path.join(self.tempdir, 'metadata1.tsv') with open(self.metadata_file1, 'w') as f: f.write('id\tcol1\n0\tfoo\nid1\tbar\n') self.metadata_file_alt_id_header = os.path.join( self.tempdir, 'metadata-alt-id-header.tsv') with open(self.metadata_file_alt_id_header, 'w') as f: f.write('#SampleID\tcol1\n0\tfoo\nid1\tbar\n') self.metadata_file2 = os.path.join(self.tempdir, 'metadata2.tsv') with open(self.metadata_file2, 'w') as f: f.write('id\tcol2\n0\tbaz\nid1\tbaa\n') self.metadata_file_mixed_types = os.path.join( self.tempdir, 'metadata-mixed-types.tsv') with open(self.metadata_file_mixed_types, 'w') as f: f.write('id\tnumbers\tstrings\nid1\t42\tabc\nid2\t-1.5\tdef\n') self.metadata_artifact = os.path.join(self.tempdir, 'metadata.qza') Artifact.import_data( 'Mapping', {'a': 'dog', 'b': 'cat'}).save(self.metadata_artifact) def tearDown(self): shutil.rmtree(self.tempdir) def _run_command(self, *args): return self.runner.invoke(self.plugin_command, args) def _assertMetadataOutput(self, result, *, exp_tsv, exp_yaml): self.assertEqual(result.exit_code, 0) artifact = Artifact.load(self.output_artifact) action_dir = artifact._archiver.provenance_dir / 'action' if exp_tsv is None: self.assertFalse((action_dir / 'metadata.tsv').exists()) else: with (action_dir / 'metadata.tsv').open() as 
fh: self.assertEqual(fh.read(), exp_tsv) with (action_dir / 'action.yaml').open() as fh: self.assertIn(exp_yaml, fh.read()) class TestMetadataSupport(MetadataTestsBase): def test_required_metadata_missing(self): result = self._run_command( 'identity-with-metadata', '--i-ints', self.input_artifact, '--o-out', self.output_artifact) self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-file'", result.output) def test_optional_metadata_missing(self): result = self._run_command( 'identity-with-optional-metadata', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--verbose') self._assertMetadataOutput(result, exp_tsv=None, exp_yaml='metadata: null') def test_single_metadata(self): for command in ('identity-with-metadata', 'identity-with-optional-metadata'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1, '--verbose') exp_tsv = 'id\tcol1\n#q2:types\tcategorical\n0\tfoo\nid1\tbar\n' self._assertMetadataOutput( result, exp_tsv=exp_tsv, exp_yaml="metadata: !metadata 'metadata.tsv'") def test_single_metadata_alt_id_header(self): # Test that the Metadata ID header is preserved, and not normalized to # 'id' (this used to be a bug). ID header normalization should only # happen when 2+ Metadata are being merged. for command in ('identity-with-metadata', 'identity-with-optional-metadata'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file_alt_id_header, '--verbose') exp_tsv = ( '#SampleID\tcol1\n' '#q2:types\tcategorical\n' '0\tfoo\n' 'id1\tbar\n' ) self._assertMetadataOutput( result, exp_tsv=exp_tsv, exp_yaml="metadata: !metadata 'metadata.tsv'") def test_multiple_metadata(self): for command in ('identity-with-metadata', 'identity-with-optional-metadata'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file_alt_id_header, '--m-metadata-file', self.metadata_file2, '--m-metadata-file', self.metadata_artifact, '--verbose') exp_tsv = ( 'id\tcol1\tcol2\ta\tb\n' '#q2:types\tcategorical\tcategorical\tcategorical\tcategorical' '\n0\tfoo\tbaz\tdog\tcat\n' ) exp_yaml = "metadata: !metadata '%s:metadata.tsv'" % ( Artifact.load(self.metadata_artifact).uuid) self._assertMetadataOutput(result, exp_tsv=exp_tsv, exp_yaml=exp_yaml) def test_invalid_metadata_merge(self): for command in ('identity-with-metadata', 'identity-with-optional-metadata'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1, '--m-metadata-file', self.metadata_file1) self.assertNotEqual(result.exit_code, 0) self.assertIn('overlapping columns', result.output) class TestMetadataColumnSupport(MetadataTestsBase): # Neither md file or column params provided def test_required_missing_file_and_column(self): result = self._run_command( 'identity-with-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact) self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-file'", result.output) # md file param missing, md column param & value provided def test_required_missing_file(self): result = self._run_command( 'identity-with-metadata-column', '--i-ints', self.input_artifact, '--m-metadata-column', 
'a', '--o-out', self.output_artifact) self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-file'", result.output) # md file param & value provided, md column param missing def test_required_missing_column(self): result = self._run_command( 'identity-with-metadata-column', '--i-ints', self.input_artifact, '--m-metadata-file', self.metadata_file1, '--o-out', self.output_artifact) self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-column'", result.output) def test_optional_metadata_missing(self): result = self._run_command( 'identity-with-optional-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--verbose') self._assertMetadataOutput(result, exp_tsv=None, exp_yaml='metadata: null') def test_optional_metadata_without_column(self): result = self._run_command( 'identity-with-optional-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1) self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-column'", result.output) def test_optional_column_without_metadata(self): result = self._run_command( 'identity-with-optional-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-column', 'col1') self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn("Missing option '--m-metadata-file'", result.output) def test_single_metadata(self): for command in ('identity-with-metadata-column', 'identity-with-optional-metadata-column'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1, '--m-metadata-column', 'col1', '--verbose') exp_tsv = 'id\tcol1\n#q2:types\tcategorical\n0\tfoo\nid1\tbar\n' if result.exit_code != 0: raise ValueError(result.exception) self._assertMetadataOutput( result, exp_tsv=exp_tsv, exp_yaml="metadata: !metadata 'metadata.tsv'") def test_multiple_metadata(self): for command in ('identity-with-metadata-column', 'identity-with-optional-metadata-column'): result = self._run_command( command, '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1, '--m-metadata-file', self.metadata_file2, '--m-metadata-file', self.metadata_artifact, '--m-metadata-column', 'col2', '--verbose') self.assertEqual(result.exit_code, 1) self.assertIn('\'--m-metadata-file\' was specified multiple times', result.output) def test_multiple_metadata_column(self): result = self._run_command( 'identity-with-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file1, '--m-metadata-file', self.metadata_file2, '--m-metadata-column', 'col1', '--m-metadata-column', 'col2') self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertIn('\'--m-metadata-file\' was specified multiple times', result.output) def test_categorical_metadata_column(self): result = self._run_command( 'identity-with-categorical-metadata-column', '--help') help_text = result.output self.assertIn( '--m-metadata-column COLUMN MetadataColumn[Categorical]', help_text) result = self._run_command( 'identity-with-categorical-metadata-column', '--i-ints', self.input_artifact, '--o-out', 
self.output_artifact, '--m-metadata-file', self.metadata_file_mixed_types, '--m-metadata-column', 'strings', '--verbose') exp_tsv = 'id\tstrings\n#q2:types\tcategorical\nid1\tabc\nid2\tdef\n' self._assertMetadataOutput( result, exp_tsv=exp_tsv, exp_yaml="metadata: !metadata 'metadata.tsv'") def test_categorical_metadata_column_type_mismatch(self): result = self._run_command( 'identity-with-categorical-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file_mixed_types, '--m-metadata-column', 'numbers') self.assertEqual(result.exit_code, 1) self.assertIn("Metadata column", result.output) self.assertIn("numeric", result.output) self.assertIn("expected Categorical", result.output) def test_numeric_metadata_column(self): result = self._run_command( 'identity-with-numeric-metadata-column', '--help') help_text = result.output self.assertIn('--m-metadata-column COLUMN MetadataColumn[Numeric]', help_text) result = self._run_command( 'identity-with-numeric-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file_mixed_types, '--m-metadata-column', 'numbers', '--verbose') exp_tsv = 'id\tnumbers\n#q2:types\tnumeric\nid1\t42\nid2\t-1.5\n' self._assertMetadataOutput( result, exp_tsv=exp_tsv, exp_yaml="metadata: !metadata 'metadata.tsv'") def test_numeric_metadata_column_type_mismatch(self): result = self._run_command( 'identity-with-numeric-metadata-column', '--i-ints', self.input_artifact, '--o-out', self.output_artifact, '--m-metadata-file', self.metadata_file_mixed_types, '--m-metadata-column', 'strings') self.assertEqual(result.exit_code, 1) self.assertIn("Metadata column", result.output) self.assertIn("categorical", result.output) self.assertIn("expected Numeric", result.output) class TestCollectionSupport(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.plugin_command = RootCommand().get_command( ctx=None, name='dummy-plugin') self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.art1_path = os.path.join(self.tempdir, 'art1.qza') self.art2_path = os.path.join(self.tempdir, 'art2.qza') self.art1 = Artifact.import_data(SingleInt, 0) self.art2 = Artifact.import_data(SingleInt, 1) self.output = os.path.join(self.tempdir, 'out') self.output2 = os.path.join(self.tempdir, 'out2') def tearDown(self): shutil.rmtree(self.tempdir) def _run_command(self, *args): return self.runner.invoke(self.plugin_command, args) def test_collection_roundtrip_list(self): result = self._run_command( 'list-params', '--p-ints', '0', '--p-ints', '1', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) result = self._run_command( 'list-of-ints', '--i-ints', self.output, '--o-output', self.output2, '--verbose' ) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_collection_roundtrip_dict_keyed(self): result = self._run_command( 'dict-params', '--p-ints', 'foo:0', '--p-ints', 'bar:1', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) 
self.assertEqual(list(collection.keys()), ['foo', 'bar']) result = self._run_command( 'dict-of-ints', '--i-ints', self.output, '--o-output', self.output2, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_collection_roundtrip_dict_unkeyed(self): result = self._run_command( 'dict-params', '--p-ints', '0', '--p-ints', '1', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) result = self._run_command( 'dict-of-ints', '--i-ints', self.output, '--o-output', self.output2, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_de_facto_list(self): self.art1.save(self.art1_path) self.art2.save(self.art2_path) result = self._run_command( 'list-of-ints', '--i-ints', self.art1_path, '--i-ints', self.art2_path, '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_de_facto_dict_keyed(self): self.art1.save(self.art1_path) self.art2.save(self.art2_path) result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{self.art1_path}', '--i-ints', f'bar:{self.art2_path}', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_de_facto_dict_unkeyed(self): self.art1.save(self.art1_path) self.art2.save(self.art2_path) result = self._run_command( 'dict-of-ints', '--i-ints', self.art1_path, '--i-ints', self.art2_path, '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['0'].view(int), 0) self.assertEqual(collection['1'].view(int), 1) self.assertEqual(list(collection.keys()), ['0', '1']) def test_mixed_keyed_unkeyed_inputs(self): self.art1.save(self.art1_path) self.art2.save(self.art2_path) result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{self.art1_path}', '--i-ints', self.art2_path, '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('Keyed values cannot be mixed with unkeyed values.', str(result.exception)) result = self._run_command( 'dict-of-ints', '--i-ints', self.art1_path, '--i-ints', f'bar:{self.art2_path}', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('Keyed values cannot be mixed with unkeyed values.', str(result.exception)) def test_mixed_keyed_unkeyed_params(self): result = self._run_command( 'dict-params', '--p-ints', 'foo:0', '--p-ints', '1', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('The unkeyed value <1> has been mixed with keyed values.' 
' All values must be keyed or unkeyed', str(result.exception)) result = self._run_command( 'dict-params', '--p-ints', '0', '--p-ints', 'bar:1', '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn('The keyed value has been mixed with unkeyed' ' values. All values must be keyed or unkeyed', str(result.exception)) def test_keyed_path_with_tilde(self): self.art1.save(self.art1_path) self.art2.save(self.art2_path) tmp = tempfile.gettempdir() tempdir = os.path.basename(self.tempdir) with modified_environ(HOME=tmp): result = self._run_command( 'dict-of-ints', '--i-ints', f'foo:{os.path.join("~", tempdir, "art1.qza")}', '--i-ints', f'bar:{os.path.join("~", tempdir, "art2.qza")}', '--o-output', self.output, '--verbose') self.assertEqual(result.exit_code, 0) collection = ResultCollection.load(self.output) self.assertEqual(collection['foo'].view(int), 0) self.assertEqual(collection['bar'].view(int), 1) self.assertEqual(list(collection.keys()), ['foo', 'bar']) def test_directory_with_non_artifacts(self): input_dir = os.path.join(self.tempdir, 'in') os.mkdir(input_dir) artifact_path = os.path.join(input_dir, 'a.qza') artifact = Artifact.import_data(IntSequence1, [0, 42, 43]) artifact.save(artifact_path) with open(os.path.join(input_dir, 'bad.txt'), 'w') as fh: fh.write('This file is not an artifact') result = self._run_command( 'list-of-ints', '--i-ints', input_dir, '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn("Invalid value for '--i-ints':", result.output) def test_empty_directory(self): result = self._run_command( 'list-of-ints', '--i-ints', self.tempdir, '--o-output', self.output, '--verbose' ) self.assertEqual(result.exit_code, 1) self.assertIn(f"Provided directory '{self.tempdir}' is empty.", result.output) @contextlib.contextmanager def modified_environ(*remove, **update): """ Taken from: https://stackoverflow.com/a/34333710. Updating the os.environ dict only modifies the environment variables from the perspective of the current Python process, so this isn't dangerous at all. Temporarily updates the ``os.environ`` dictionary in-place. The ``os.environ`` dictionary is updated in-place so that the modification is sure to work in all situations. :param remove: Environment variables to remove. :param update: Dictionary of environment variables and values to add/update. """ env = os.environ update = update or {} remove = remove or [] # List of environment variables being updated or removed. stomped = (set(update.keys()) | set(remove)) & set(env.keys()) # Environment variables and values to restore on exit. update_after = {k: env[k] for k in stomped} # Environment variables and values to remove on exit. remove_after = frozenset(k for k in update if k not in env) try: env.update(update) [env.pop(k, None) for k in remove] yield finally: env.update(update_after) [env.pop(k) for k in remove_after] if __name__ == "__main__": unittest.main() q2cli-2024.5.0/q2cli/tests/test_core.py000066400000000000000000000416731462552630000175110ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- import os.path import pathlib import shutil import tempfile import unittest import configparser import zipfile import pandas as pd from click.testing import CliRunner from qiime2 import Artifact from qiime2.core.testing.type import IntSequence1 from qiime2.core.testing.util import get_dummy_plugin from qiime2.sdk.util import camel_to_snake from qiime2.sdk.usage import UsageVariable from qiime2.sdk import PluginManager from qiime2.core.archive.provenance_lib import DummyArtifacts, ProvDAG from qiime2.core.archive.provenance_lib.replay import ( ReplayConfig, param_is_metadata_column, dump_recorded_md_file, ReplayNamespaces, build_import_usage, build_action_usage, ActionCollections, replay_provenance, replay_supplement ) from qiime2.core.archive.provenance_lib.usage_drivers import ReplayPythonUsage import q2cli import q2cli.util import q2cli.builtin.info import q2cli.builtin.tools from q2cli.commands import RootCommand from q2cli.core.config import CLIConfig from q2cli.core.usage import ReplayCLIUsage, CLIUsageVariable class TestOption(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.parser = configparser.ConfigParser() self.path = os.path.join(q2cli.util.get_app_dir(), 'cli-colors.theme') def tearDown(self): shutil.rmtree(self.tempdir) def _assertRepeatedOptionError(self, result, option): self.assertEqual(result.exit_code, 1) self.assertTrue(result.output.startswith('Usage:')) self.assertRegex(result.output, '.*%s.* was specified multiple times' % option) def test_repeated_eager_option_with_callback(self): result = self.runner.invoke( q2cli.builtin.tools.tools, ['list-types', '--tsv', '--tsv']) self._assertRepeatedOptionError(result, '--tsv') def test_repeated_builtin_flag(self): result = self.runner.invoke( q2cli.builtin.tools.tools, ['import', '--input-path', 'a', '--input-path', 'b']) self._assertRepeatedOptionError(result, '--input-path') def test_repeated_action_flag(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') out_path = os.path.join(self.tempdir, 'out.qza') result = self.runner.invoke( command, ['no-input-method', '--o-out', out_path, '--verbose', '--verbose']) self._assertRepeatedOptionError(result, '--verbose') def test_repeated_builtin_option(self): input_path = os.path.join(self.tempdir, 'ints.txt') with open(input_path, 'w') as f: f.write('42\n43\n44\n') output_path = os.path.join(self.tempdir, 'out.qza') result = self.runner.invoke( q2cli.builtin.tools.tools, ['import', '--input-path', input_path, '--output-path', output_path, '--type', 'IntSequence1', '--type', 'IntSequence1']) self._assertRepeatedOptionError(result, '--type') def test_repeated_action_option(self): qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') out_path = os.path.join(self.tempdir, 'out.qza') result = self.runner.invoke( command, ['no-input-method', '--o-out', out_path, '--o-out', out_path]) self._assertRepeatedOptionError(result, '--o-out') def test_repeated_multiple_option(self): input_path = os.path.join(self.tempdir, 'ints.qza') artifact = Artifact.import_data(IntSequence1, [0, 42, 43], list) artifact.save(input_path) metadata_path1 = os.path.join(self.tempdir, 'metadata1.tsv') with open(metadata_path1, 'w') as f: f.write('id\tcol1\nid1\tfoo\nid2\tbar\n') metadata_path2 = os.path.join(self.tempdir, 'metadata2.tsv') with 
open(metadata_path2, 'w') as f: f.write('id\tcol2\nid1\tbaz\nid2\tbaa\n') output_path = os.path.join(self.tempdir, 'out.qza') qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') result = self.runner.invoke( command, ['identity-with-metadata', '--i-ints', input_path, '--o-out', output_path, '--m-metadata-file', metadata_path1, '--m-metadata-file', metadata_path2, '--verbose']) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.exists(output_path)) self.assertEqual(Artifact.load(output_path).view(list), [0, 42, 43]) def test_config_expected(self): self.parser['type'] = {'underline': 't'} with open(self.path, 'w') as fh: self.parser.write(fh) config = CLIConfig() config.parse_file(self.path) self.assertEqual( config.styles['type'], {'underline': True}) def test_config_bad_selector(self): self.parser['tye'] = {'underline': 't'} with open(self.path, 'w') as fh: self.parser.write(fh) config = CLIConfig() with self.assertRaisesRegex( configparser.Error, 'tye.*valid selector.*valid selectors'): config.parse_file(self.path) def test_config_bad_styling(self): self.parser['type'] = {'underlined': 't'} with open(self.path, 'w') as fh: self.parser.write(fh) config = CLIConfig() with self.assertRaisesRegex( configparser.Error, 'underlined.*valid styling.*valid ' 'stylings'): config.parse_file(self.path) def test_config_bad_color(self): self.parser['type'] = {'fg': 'purple'} with open(self.path, 'w') as fh: self.parser.write(fh) config = CLIConfig() with self.assertRaisesRegex( configparser.Error, 'purple.*valid color.*valid colors'): config.parse_file(self.path) def test_config_bad_boolean(self): self.parser['type'] = {'underline': 'g'} with open(self.path, 'w') as fh: self.parser.write(fh) config = CLIConfig() with self.assertRaisesRegex( configparser.Error, 'g.*valid boolean.*valid booleans'): config.parse_file(self.path) def test_no_file(self): config = CLIConfig() with self.assertRaisesRegex( configparser.Error, "'Path' is not a valid filepath."): config.parse_file('Path') class ReplayCLIUsageTests(unittest.TestCase): @classmethod def setUpClass(cls): cls.das = DummyArtifacts() cls.tempdir = cls.das.tempdir cls.pm = PluginManager() @classmethod def tearDownClass(cls): cls.das.free() def test_init_metadata(self): use = ReplayCLIUsage() var = use.init_metadata(name='testing', factory=lambda: None) self.assertEqual(var.name, '') self.assertEqual(var.var_type, 'metadata') def test_init_metadata_with_dumped_md_fn(self): use = ReplayCLIUsage() var = use.init_metadata( name='testing', factory=lambda: None, dumped_md_fn='some_md') self.assertEqual(var.var_type, 'metadata') self.assertEqual(var.name, '"some_md.tsv"') def test_param_is_metadata_col(self): cfg = ReplayConfig(use=ReplayCLIUsage(), use_recorded_metadata=False, pm=self.pm) actual = param_is_metadata_column( cfg, 'metadata', 'dummy_plugin', 'identity_with_metadata_column' ) self.assertTrue(actual) actual = param_is_metadata_column( cfg, 'int1', 'dummy_plugin', 'concatenate_ints' ) self.assertFalse(actual) with self.assertRaisesRegex(KeyError, "No action.*registered.*"): param_is_metadata_column( cfg, 'ints', 'dummy_plugin', 'young' ) with self.assertRaisesRegex(KeyError, "No param.*registered.*"): param_is_metadata_column( cfg, 'thugger', 'dummy_plugin', 'split_ints' ) with self.assertRaisesRegex(KeyError, "No plugin.*registered.*"): param_is_metadata_column( cfg, 'fake_param', 'dummy_hard', 'split_ints' ) def test_dump_recorded_md_file_to_custom_dir(self): dag = self.das.int_seq_with_md.dag uuid = 
self.das.int_seq_with_md.uuid out_dir = 'custom_dir' provnode = dag.get_node_data(uuid) og_md = provnode.metadata['metadata'] action_name = 'concatenate_ints_0' md_id = 'metadata' fn = 'metadata.tsv' with tempfile.TemporaryDirectory() as tempdir: cfg = ReplayConfig(use=ReplayCLIUsage(), pm=self.pm, md_out_dir=(tempdir + '/' + out_dir)) dump_recorded_md_file(cfg, provnode, action_name, md_id, fn) out_path = pathlib.Path(tempdir) / out_dir / action_name / fn self.assertTrue(out_path.is_file()) dumped_df = pd.read_csv(out_path, sep='\t') pd.testing.assert_frame_equal(dumped_df, og_md) # If we run it again, it shouldn't overwrite 'recorded_metadata', # so we should have two files action_name_2 = 'concatenate_ints_1' md_id2 = 'metadata' fn2 = 'metadata_1.tsv' dump_recorded_md_file(cfg, provnode, action_name_2, md_id2, fn2) out_path2 = pathlib.Path(tempdir) / out_dir / action_name_2 / fn2 # are both files where expected? self.assertTrue(out_path.is_file()) self.assertTrue(out_path2.is_file()) def test_build_import_usage_cli(self): ns = ReplayNamespaces() cfg = ReplayConfig(use=ReplayCLIUsage(), use_recorded_metadata=False, pm=self.pm) dag = self.das.concated_ints_v6.dag import_uuid = '8dea2f1a-2164-4a85-9f7d-e0641b1db22b' import_node = dag.get_node_data(import_uuid) c_to_s_type = camel_to_snake(import_node.type) unq_var_nm = c_to_s_type + '_0' build_import_usage(import_node, ns, cfg) rendered = cfg.use.render() usg_var = ns.get_usg_var_record(import_uuid).variable out_name = usg_var.to_interface_name() self.assertIsInstance(usg_var, UsageVariable) self.assertEqual(usg_var.var_type, 'artifact') self.assertEqual(usg_var.name, unq_var_nm) self.assertRegex(rendered, r'qiime tools import \\') self.assertRegex(rendered, f" --type '{import_node.type}'") self.assertRegex(rendered, " --input-path ") self.assertRegex(rendered, f" --output-path {out_name}") def test_build_action_usage_cli(self): plugin = 'dummy-plugin' action = 'concatenate-ints' cfg = ReplayConfig(use=ReplayCLIUsage(), use_recorded_metadata=False, pm=self.pm) ns = ReplayNamespaces() import_var_1 = CLIUsageVariable( 'imported_ints_0', lambda: None, 'artifact', cfg.use ) import_var_2 = CLIUsageVariable( 'imported_ints_1', lambda: None, 'artifact', cfg.use ) import_uuid_1 = '8dea2f1a-2164-4a85-9f7d-e0641b1db22b' import_uuid_2 = '7727c060-5384-445d-b007-b64b41a090ee' ns.add_usg_var_record(import_uuid_1, 'imported_ints_0') ns.update_usg_var_record(import_uuid_1, import_var_1) ns.add_usg_var_record(import_uuid_2, 'imported_ints_1') ns.update_usg_var_record(import_uuid_2, import_var_2) dag = self.das.concated_ints_v6.dag action_uuid = '5035a60e-6f9a-40d4-b412-48ae52255bb5' node_uuid = '6facaf61-1676-45eb-ada0-d530be678b27' node = dag.get_node_data(node_uuid) actions = ActionCollections( std_actions={action_uuid: {node_uuid: 'concatenated_ints'}} ) unique_var_name = node.action.output_name + '_0' build_action_usage(node, ns, actions.std_actions, action_uuid, cfg) rendered = cfg.use.render() usg_var = ns.get_usg_var_record(node_uuid).variable out_name = usg_var.to_interface_name() self.assertIsInstance(usg_var, UsageVariable) self.assertEqual(usg_var.var_type, 'artifact') self.assertEqual(usg_var.name, unique_var_name) self.assertIn(f'qiime {plugin} {action}', rendered) self.assertIn('--i-ints1 imported-ints-0.qza', rendered) self.assertIn('--i-ints3 imported-ints-1.qza', rendered) self.assertIn('--p-int1 7', rendered) self.assertIn(f'--o-concatenated-ints {out_name}', rendered) def test_replay_optional_param_is_none(self): dag = 
self.das.int_seq_optional_input.dag with tempfile.TemporaryDirectory() as tempdir: out_path = pathlib.Path(tempdir) / 'ns_coll.txt' replay_provenance(ReplayCLIUsage, dag, out_path, md_out_dir=tempdir) with open(out_path, 'r') as fp: rendered = fp.read() self.assertIn('--i-ints int-sequence1-0.qza', rendered) self.assertIn('--p-num1', rendered) self.assertNotIn('--i-optional1', rendered) self.assertNotIn('--p-num2', rendered) def test_replay_from_provdag_ns_collision(self): """ This artifact's dag contains a few results with the output-name filtered-table, so is a good check for namespace collisions if we're not uniquifying variable names properly. """ with tempfile.TemporaryDirectory() as tempdir: self.das.concated_ints.artifact.save( os.path.join(tempdir, 'c1.qza') ) self.das.other_concated_ints.artifact.save( os.path.join(tempdir, 'c2.qza') ) dag = ProvDAG(tempdir) exp = ['concatenated-ints-0', 'concatenated-ints-1'] with tempfile.TemporaryDirectory() as tempdir: out_path = pathlib.Path(tempdir) / 'ns_coll.txt' replay_provenance(ReplayCLIUsage, dag, out_path, md_out_dir=tempdir) with open(out_path, 'r') as fp: rendered = fp.read() for name in exp: self.assertIn(name, rendered) class WriteReproducibilitySupplementTests(unittest.TestCase): @classmethod def setUpClass(cls): cls.das = DummyArtifacts() cls.tempdir = cls.das.tempdir cls.pm = PluginManager() @classmethod def tearDownClass(cls): cls.das.free() def test_replay_supplement_from_fp(self): fp = self.das.concated_ints_with_md.filepath with tempfile.TemporaryDirectory() as tempdir: out_fp = os.path.join(tempdir, 'supplement.zip') replay_supplement( usage_drivers=[ReplayPythonUsage, ReplayCLIUsage], payload=fp, out_fp=out_fp ) self.assertTrue(zipfile.is_zipfile(out_fp)) exp = { 'supplement/', 'supplement/python3_replay.py', 'supplement/cli_replay.sh', 'supplement/citations.bib', 'supplement/recorded_metadata/', 'supplement/recorded_metadata/' 'dummy_plugin_identity_with_metadata_0/metadata_0.tsv', } with zipfile.ZipFile(out_fp, 'r') as myzip: namelist_set = set(myzip.namelist()) for item in exp: self.assertIn(item, namelist_set) def test_replay_supplement_from_provdag(self): dag = self.das.concated_ints_with_md.dag with tempfile.TemporaryDirectory() as tempdir: out_fp = os.path.join(tempdir, 'supplement.zip') replay_supplement( usage_drivers=[ReplayPythonUsage, ReplayCLIUsage], payload=dag, out_fp=out_fp ) self.assertTrue(zipfile.is_zipfile(out_fp)) exp = { 'supplement/', 'supplement/python3_replay.py', 'supplement/cli_replay.sh', 'supplement/citations.bib', 'supplement/recorded_metadata/', 'supplement/recorded_metadata/' 'dummy_plugin_identity_with_metadata_0/metadata_0.tsv', } with zipfile.ZipFile(out_fp, 'r') as myzip: namelist_set = set(myzip.namelist()) for item in exp: self.assertIn(item, namelist_set) if __name__ == "__main__": unittest.main() q2cli-2024.5.0/q2cli/tests/test_dev.py000066400000000000000000000175021462552630000173310ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ----------------------------------------------------------------------------
import os
import shutil
import unittest
import tempfile
import configparser

from click.testing import CliRunner
from qiime2 import Artifact
from qiime2.core.testing.type import IntSequence1
from qiime2.core.testing.util import get_dummy_plugin

import q2cli.util
from q2cli.builtin.dev import dev


class TestDev(unittest.TestCase):
    path = os.path.join(q2cli.util.get_app_dir(), 'cli-colors.theme')
    old_settings = None
    if os.path.exists(path):
        old_settings = configparser.ConfigParser()
        old_settings.read(path)

    def setUp(self):
        get_dummy_plugin()
        self.parser = configparser.ConfigParser()
        self.runner = CliRunner(mix_stderr=False)
        self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-')
        self.generated_config = os.path.join(self.tempdir, 'generated-theme')
        self.artifact1_path = os.path.join(self.tempdir, 'a1.qza')
        self.mapping_path = os.path.join(self.tempdir, 'mapping.qza')

        artifact1 = Artifact.import_data(IntSequence1, [0, 42, 43])
        artifact1.save(self.artifact1_path)
        self.artifact1_root_dir = str(artifact1.uuid)

        mapping = Artifact.import_data('Mapping', {'foo': '42'})
        mapping.save(self.mapping_path)

        self.config = os.path.join(self.tempdir, 'good-config.ini')
        self.parser['type'] = {'underline': 't'}
        with open(self.config, 'w') as fh:
            self.parser.write(fh)

    def tearDown(self):
        if self.old_settings is not None:
            with open(self.path, 'w') as fh:
                self.old_settings.write(fh)
        shutil.rmtree(self.tempdir)

    def test_import_theme(self):
        result = self.runner.invoke(
            dev, ['import-theme', '--theme', self.config])
        self.assertEqual(result.exit_code, 0)

    def test_export_default_theme(self):
        result = self.runner.invoke(
            dev, ['export-default-theme', '--output-path',
                  self.generated_config])
        self.assertEqual(result.exit_code, 0)

    def test_reset_theme(self):
        result = self.runner.invoke(
            dev, ['reset-theme', '--yes'])
        self.assertEqual(result.exit_code, 0)

    def test_reset_theme_no_yes(self):
        result = self.runner.invoke(
            dev, ['reset-theme'])
        self.assertNotEqual(result.exit_code, 0)

    # result_type & result_data tests
    def test_assert_result_type_artifact_success(self):
        result = self.runner.invoke(dev, ['assert-result-type',
                                          self.mapping_path,
                                          '--qiime-type', 'Mapping'])
        # single regex to account for tempdir path
        expected_regex = r'The input file \(.*mapping.qza\) type and the'\
                         r' expected type \(Mapping\) match'
        self.assertEqual(result.exit_code, 0)
        self.assertRegex(result.stdout, expected_regex)

    def test_assert_result_type_visualization_success(self):
        dummy_plugin = get_dummy_plugin()
        self.viz_path = os.path.join(self.tempdir, 'viz.qzv')
        most_common_viz = dummy_plugin.actions['most_common_viz']
        viz = most_common_viz(Artifact.load(self.artifact1_path))
        viz.visualization.save(self.viz_path)

        result = self.runner.invoke(dev, ['assert-result-type',
                                          self.viz_path,
                                          '--qiime-type', 'Visualization'])
        expected_regex = r'The input file \(.*viz\.qzv\) type and the'\
                         r' expected type \(Visualization\) match'
        self.assertEqual(result.exit_code, 0)
        self.assertRegex(result.stdout, expected_regex)

    def test_assert_result_type_load_failure(self):
        result = self.runner.invoke(dev, ['assert-result-type',
                                          'turkey_sandwhere.qza',
                                          '--qiime-type', 'Mapping'])
        self.assertEqual(result.exit_code, 1)
        self.assertRegex(result.stderr,
                         r'File\s*\'turkey_sandwhere\.qza\'\s*does not exist')

    def test_assert_result_type_invalid_qiime_type(self):
        result = self.runner.invoke(dev, ['assert-result-type',
self.mapping_path, '--qiime-type', 'Squid']) self.assertEqual(result.exit_code, 1) self.assertIn('Expected Squid, observed Mapping', result.stderr) def test_assert_result_data_success(self): result = self.runner.invoke(dev, ['assert-result-data', self.mapping_path, '--zip-data-path', 'mapping.tsv', '--expression', '42']) self.assertEqual(result.exit_code, 0) self.assertIn(r'"42" was found in mapping.tsv', result.stdout) def test_assert_result_data_load_failure(self): result = self.runner.invoke(dev, ['assert-result-data', 'turkey_sandwhen.qza', '--zip-data-path', 'mapping.tsv', '--expression', '42']) self.assertEqual(result.exit_code, 1) self.assertRegex(result.stderr, r'File\s*\'turkey_sandwhen\.qza\'\s*does not exist') def test_assert_result_data_zip_data_path_zero_matches(self): result = self.runner.invoke(dev, ['assert-result-data', self.mapping_path, '--zip-data-path', 'turkey_sandwhy.tsv', '--expression', '42']) self.assertEqual(result.exit_code, 1) self.assertRegex(result.stderr, r'did not produce exactly one match.\n' r'Matches: \[\]\n') def test_assert_result_data_zip_data_path_multiple_matches(self): self.double_path = os.path.join(self.tempdir, 'double.qza') double_artifact = Artifact.import_data('SingleInt', 3) double_artifact.save(self.double_path) result = self.runner.invoke(dev, ['assert-result-data', self.double_path, '--zip-data-path', 'file*.txt', '--expression', '3']) self.assertEqual(result.exit_code, 1) self.assertRegex(result.stderr, r'Value provided for zip_data_path' r' \(file\*\.txt\) did not produce' r' exactly one match\.') def test_assert_result_data_match_expression_not_found(self): result = self.runner.invoke(dev, ['assert-result-data', self.mapping_path, '--zip-data-path', 'mapping.tsv', '--expression', 'foobar']) self.assertEqual(result.exit_code, 1) self.assertRegex(result.stderr, r'Expression \'foobar\'' r' not found in .*\/data\/mapping\.tsv\.') q2cli-2024.5.0/q2cli/tests/test_mystery_stew.py000066400000000000000000000026611462552630000213310ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import os import subprocess import tempfile from q2cli.core.usage import CLIUsage from q2cli.util import get_plugin_manager import pytest def _labeler(val): if hasattr(val, 'id'): return val.id return val def get_tests(): tests = [] pm = get_plugin_manager() try: plugin = pm.plugins['mystery-stew'] except KeyError: return tests for action in plugin.actions.values(): for name in action.examples: tests.append((action, name)) return tests @pytest.mark.parametrize('action,example', get_tests(), ids=_labeler) def test_mystery_stew(action, example): example_f = action.examples[example] use = CLIUsage(enable_assertions=True) example_f(use) rendered = '\n'.join(use.recorder) with tempfile.TemporaryDirectory() as tmpdir: for ref, data in use.get_example_data(): data.save(os.path.join(tmpdir, ref)) subprocess.run([rendered], shell=True, check=True, cwd=tmpdir, env={**os.environ}) q2cli-2024.5.0/q2cli/tests/test_tools.py000066400000000000000000001441251462552630000177150ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. 
# # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import os import gc import re import shutil import unittest from unittest.mock import patch import tempfile import zipfile import bibtexparser as bp from click.testing import CliRunner from qiime2 import Artifact, Metadata from qiime2.core.testing.util import get_dummy_plugin from qiime2.metadata.base import SUPPORTED_COLUMN_TYPES from qiime2.core.cache import Cache from qiime2.sdk.result import Result from qiime2.sdk.plugin_manager import PluginManager from q2cli.util import load_metadata from q2cli.builtin.tools import tools from q2cli.commands import RootCommand from q2cli.core.usage import ReplayCLIUsage class TestCastMetadata(unittest.TestCase): def setUp(self): self.runner = CliRunner() self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.metadata_file = os.path.join( self.tempdir, 'metadata.tsv') with open(self.metadata_file, 'w') as f: f.write('id\tnumbers\tstrings\n0\t42\tabc\n1\t-1.5\tdef') self.cast_metadata_dump = \ ('id\tnumbers\tstrings\n#q2:types\tcategorical\tcategorical\n0\t42' '\tabc\n1\t-1.5\tdef\n\n') self.output_file = os.path.join( self.tempdir, 'test_output.tsv') def test_input_invalid_column_type(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:foo', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn('Unknown column type provided.', result.output) def test_input_duplicate_columns(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:numerical', '--cast', 'numbers:categorical', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn( '"numbers" appears in cast more than once.', result.output) def test_input_invalid_cast_format_missing_colon(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn('Missing `:` in --cast numbers', result.output) def test_input_invalid_cast_format_extra_colon(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers::', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn('Incorrect number of fields in --cast numbers::', result.output) self.assertIn('Observed 3', result.output) def test_error_on_extra(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'extra:numeric', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn( "The following cast columns were not found within the" " metadata: extra", result.output) def test_error_on_missing(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:categorical', '--error-on-missing', '--output-file', self.output_file]) self.assertNotEqual(result.exit_code, 0) self.assertIn( "The following columns within the metadata" " were not provided in the cast: strings", result.output) def test_extra_columns_removed(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:categorical', '--cast', 'extra:numeric', '--ignore-extra', '--output-file', self.output_file]) self.assertEqual(result.exit_code, 0) casted_metadata = 
load_metadata(self.output_file) self.assertNotIn('extra', casted_metadata.columns.keys()) def test_complete_successful_run(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:categorical', '--output-file', self.output_file]) self.assertEqual(result.exit_code, 0) input_metadata = load_metadata(self.metadata_file) self.assertEqual('numeric', input_metadata.columns['numbers'].type) casted_metadata = load_metadata(self.output_file) self.assertEqual('categorical', casted_metadata.columns['numbers'].type) def test_write_to_stdout(self): result = self.runner.invoke( tools, ['cast-metadata', self.metadata_file, '--cast', 'numbers:categorical']) self.assertEqual(result.exit_code, 0) self.assertEqual(self.cast_metadata_dump, result.output) def test_valid_column_types(self): result = self.runner.invoke(tools, ['cast-metadata', '--help']) for col_type in SUPPORTED_COLUMN_TYPES: self.assertIn(col_type, result.output) class TestInspectMetadata(unittest.TestCase): def setUp(self): dummy_plugin = get_dummy_plugin() self.runner = CliRunner() self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') self.metadata_file_mixed_types = os.path.join( self.tempdir, 'metadata-mixed-types.tsv') with open(self.metadata_file_mixed_types, 'w') as f: f.write('id\tnumbers\tstrings\n0\t42\tabc\n1\t-1.5\tdef\n') self.bad_metadata_file = os.path.join( self.tempdir, 'bad-metadata.tsv') with open(self.bad_metadata_file, 'w') as f: f.write('wrong\tnumbers\tstrings\nid1\t42\tabc\nid2\t-1.5\tdef\n') self.metadata_artifact = os.path.join(self.tempdir, 'metadata.qza') Artifact.import_data( 'Mapping', {'a': 'dog', 'b': 'cat'}).save(self.metadata_artifact) self.ints1 = os.path.join(self.tempdir, 'ints1.qza') ints1 = Artifact.import_data( 'IntSequence1', [0, 42, 43], list) ints1.save(self.ints1) self.ints2 = os.path.join(self.tempdir, 'ints') ints1.export_data(self.ints2) self.viz = os.path.join(self.tempdir, 'viz.qzv') most_common_viz = dummy_plugin.actions['most_common_viz'] self.viz = most_common_viz(ints1).visualization.save(self.viz) def tearDown(self): shutil.rmtree(self.tempdir) def test_artifact_w_metadata(self): result = self.runner.invoke( tools, ['inspect-metadata', self.metadata_artifact]) self.assertEqual(result.exit_code, 0) self.assertIn('COLUMN NAME TYPE', result.output) self.assertIn("=========== ===========", result.output) self.assertIn("a categorical", result.output) self.assertIn("b categorical", result.output) self.assertIn("IDS: 1", result.output) self.assertIn("COLUMNS: 2", result.output) def test_artifact_no_metadata(self): result = self.runner.invoke(tools, ['inspect-metadata', self.ints1]) self.assertEqual(result.exit_code, 1) self.assertIn("IntSequence1 cannot be viewed as QIIME 2 metadata", result.output) def test_visualization(self): # make a viz first: qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') # build output parameter arguments and expected output file names viz_path = os.path.join(self.tempdir, 'viz.qzv') result = self.runner.invoke( command, ['most-common-viz', '--i-ints', self.ints1, '--o-visualization', viz_path, '--verbose']) result = self.runner.invoke(tools, ['inspect-metadata', viz_path]) self.assertEqual(result.exit_code, 1) self.assertIn("Visualizations cannot be viewed as QIIME 2 metadata", result.output) def test_metadata_file(self): result = self.runner.invoke( tools, ['inspect-metadata', self.metadata_file_mixed_types]) self.assertEqual(result.exit_code, 0) self.assertIn('COLUMN NAME 
TYPE', result.output) self.assertIn("=========== ===========", result.output) self.assertIn("numbers numeric", result.output) self.assertIn("strings categorical", result.output) self.assertIn("IDS: 2", result.output) self.assertIn("COLUMNS: 2", result.output) def test_bad_metadata_file(self): result = self.runner.invoke( tools, ['inspect-metadata', self.bad_metadata_file]) self.assertEqual(result.exit_code, 1) self.assertIn("'wrong'", result.output) def test_tsv(self): result = self.runner.invoke(tools, [ 'inspect-metadata', self.metadata_file_mixed_types, '--tsv']) self.assertEqual(result.exit_code, 0) self.assertIn('COLUMN NAME\tTYPE', result.output) self.assertIn("numbers\tnumeric", result.output) self.assertIn("strings\tcategorical", result.output) self.assertNotIn("=", result.output) self.assertNotIn("IDS:", result.output) self.assertNotIn("COLUMNS:", result.output) def test_merged_metadata(self): result = self.runner.invoke(tools, [ 'inspect-metadata', self.metadata_artifact, self.metadata_file_mixed_types]) self.assertEqual(result.exit_code, 0) self.assertIn('COLUMN NAME TYPE', result.output) self.assertIn("=========== ===========", result.output) self.assertIn("a categorical", result.output) self.assertIn("b categorical", result.output) self.assertIn("numbers numeric", result.output) self.assertIn("strings categorical", result.output) self.assertIn("IDS: 1", result.output) # only 1 ID is shared self.assertIn("COLUMNS: 4", result.output) def test_export_to_dir_w_format(self): output_path = os.path.join(self.tempdir, 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceDirectoryFormat' ]) self.assertEqual(result.exit_code, 0) self.assertTrue(os.path.isdir(output_path)) def test_export_to_dir_no_format(self): output_path = os.path.join(self.tempdir, 'output') self.runner.invoke(tools, [ 'export', '--input-path', self.viz, '--output-path', output_path ]) self.assertTrue(os.path.isdir(output_path)) self.assertIn('index.html', os.listdir(output_path)) self.assertIn('index.tsv', os.listdir(output_path)) def test_export_to_file(self): output_path = os.path.join(self.tempdir, 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceFormatV2' ]) with open(output_path, 'r') as f: file = f.read() self.assertEqual(result.exit_code, 0) self.assertIn('0', file) self.assertIn('42', file) self.assertIn('43', file) def test_export_to_file_creates_directories(self): output_path = os.path.join(self.tempdir, 'somewhere', 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceFormatV2' ]) with open(output_path, 'r') as f: file = f.read() self.assertEqual(result.exit_code, 0) self.assertIn('0', file) self.assertIn('42', file) self.assertIn('43', file) def test_export_visualization_to_dir(self): output_path = os.path.join(self.tempdir, 'output') self.runner.invoke(tools, [ 'export', '--input-path', self.viz, '--output-path', output_path ]) self.assertIn('index.html', os.listdir(output_path)) self.assertIn('index.tsv', os.listdir(output_path)) self.assertTrue(os.path.isdir(output_path)) def test_export_visualization_w_format(self): output_path = os.path.join(self.tempdir, 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.viz, '--output-path', output_path, '--output-format', 'IntSequenceDirectoryFormat' ]) 
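# Exporting a visualization with an explicit --output-format is expected to
# fail; the assertions below check the nonzero exit code and an error message
# that mentions both the visualization and the --output-format option.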
self.assertEqual(result.exit_code, 1) self.assertIn('visualization', result.output) self.assertIn('--output-format', result.output) def test_export_path_file_is_replaced(self): output_path = os.path.join(self.tempdir, 'output') with open(output_path, 'w') as file: file.write('HelloWorld') self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceFormatV2' ]) with open(output_path, 'r') as f: file = f.read() self.assertNotIn('HelloWorld', file) def test_export_to_file_with_format_success_message(self): output_path = os.path.join(self.tempdir, 'output.int') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceFormatV2' ]) success = 'Exported %s as IntSequenceFormatV2 to file %s\n' % ( self.ints1, output_path) self.assertEqual(success, result.output) def test_export_to_dir_without_format_success_message(self): output_path = os.path.join(self.tempdir, 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path ]) success = 'Exported %s as IntSequenceDirectoryFormat to '\ 'directory %s\n' % (self.ints1, output_path) self.assertEqual(success, result.output) def test_export_visualization_to_dir_success_message(self): output_path = os.path.join(self.tempdir, 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.viz, '--output-path', output_path ]) success = 'Exported %s as Visualization to '\ 'directory %s\n' % (self.viz, output_path) self.assertEqual(success, result.output) def test_extract_to_dir_success_message(self): result = self.runner.invoke(tools, [ 'extract', '--input-path', self.ints1, '--output-path', self.tempdir ]) success = 'Extracted %s to directory %s' % (self.ints1, self.tempdir) self.assertIn(success, result.output) def test_import_from_directory_without_format_success_message(self): output_path = os.path.join(self.tempdir, 'output.qza') result = self.runner.invoke(tools, [ 'import', '--input-path', self.ints2, '--type', 'IntSequence1', '--output-path', output_path ]) success = 'Imported %s as IntSequenceDirectoryFormat to '\ '%s\n' % (self.ints2, output_path) self.assertEqual(success, result.output) def test_import_from_file_with_format_success_message(self): output_path = os.path.join(self.tempdir, 'output.qza') result = self.runner.invoke(tools, [ 'import', '--input-path', os.path.join(self.ints2, 'ints.txt'), '--type', 'IntSequence1', '--output-path', output_path, '--input-format', 'IntSequenceFormat' ]) success = 'Imported %s as IntSequenceFormat to '\ '%s\n' % (os.path.join(self.ints2, 'ints.txt'), output_path) self.assertEqual(success, result.output) class TestExportToFileFormat(TestInspectMetadata): def setUp(self): super().setUp() # Working directory is changed to temp directory to prevent cluttering # the repo directory with test files self.current_dir = os.getcwd() os.chdir(self.tempdir) def tearDown(self): super().tearDown() os.chdir(self.current_dir) def test_export_file_format(self): output_path = os.path.join(os.getcwd(), 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceFormat' ]) success = 'Exported %s as IntSequenceFormat to file %s\n' % \ (self.ints1, output_path) self.assertEqual(success, result.output) def test_export_dir_format(self): result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', os.getcwd(), 
'--output-format', 'IntSequenceDirectoryFormat' ]) success = 'Exported %s as IntSequenceDirectoryFormat to directory ' \ '%s\n' % (self.ints1, os.getcwd()) self.assertEqual(success, result.output) def test_export_dir_format_nested(self): output_path = os.path.join(os.getcwd(), 'output') result = self.runner.invoke(tools, [ 'export', '--input-path', self.ints1, '--output-path', output_path, '--output-format', 'IntSequenceDirectoryFormat' ]) success = 'Exported %s as IntSequenceDirectoryFormat to directory ' \ '%s\n' % (self.ints1, output_path) self.assertEqual(success, result.output) def test_export_to_filename_without_path(self): output_path = 'output' result = self.runner.invoke(tools, [ 'export', '--input-path', self.viz, '--output-path', output_path ]) success = 'Exported %s as Visualization to '\ 'directory %s\n' % (self.viz, output_path) self.assertEqual(success, result.output) class TestImport(unittest.TestCase): @classmethod def setUpClass(cls): cls.runner = CliRunner() cls.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') cls.in_dir1 = os.path.join(cls.tempdir, 'input1') os.mkdir(cls.in_dir1) with open(os.path.join(cls.in_dir1, 'ints.txt'), 'w') as fh: for i in range(5): fh.write(f'{i}\n') fh.write('a\n') cls.in_dir2 = os.path.join(cls.tempdir, 'input2') os.mkdir(cls.in_dir2) with open(os.path.join(cls.in_dir2, 'ints.txt'), 'w') as fh: fh.write('1\n') fh.write('a\n') fh.write('3\n') cls.cache = Cache(os.path.join(cls.tempdir, 'new_cache')) @classmethod def tearDownClass(cls): shutil.rmtree(cls.tempdir) def test_import_min_validate(self): out_fp = os.path.join(self.tempdir, 'out1.qza') # import with min allows format error outside of min purview # (validate level min checks only first 5 items) result = self.runner.invoke(tools, [ 'import', '--type', 'IntSequence1', '--input-path', self.in_dir1, '--output-path', out_fp, '--validate-level', 'min' ]) self.assertEqual(result.exit_code, 0) # import with max should catch all format errors, max is default result = self.runner.invoke(tools, [ 'import', '--type', 'IntSequence1', '--input-path', self.in_dir1, '--output-path', out_fp ]) self.assertEqual(result.exit_code, 1) self.assertIn('Line 6 is not an integer', result.output) out_fp = os.path.join(self.tempdir, 'out2.qza') # import with min catches format errors within its purview result = self.runner.invoke(tools, [ 'import', '--type', 'IntSequence1', '--input-path', self.in_dir2, '--output-path', out_fp, '--validate-level', 'min' ]) self.assertEqual(result.exit_code, 1) self.assertIn('Line 2 is not an integer', result.output) def test_cache_import_min_validate(self): # import with min allows format error outside of min purview # (validate level min checks only first 5 items) result = self.runner.invoke(tools, [ 'cache-import', '--type', 'IntSequence1', '--input-path', self.in_dir1, '--cache', str(self.cache.path), '--key', 'foo', '--validate-level', 'min' ]) self.assertEqual(result.exit_code, 0) # import with max should catch all format errors, max is default result = self.runner.invoke(tools, [ 'cache-import', '--type', 'IntSequence1', '--input-path', self.in_dir1, '--cache', str(self.cache.path), '--key', 'foo' ]) self.assertEqual(result.exit_code, 1) self.assertIn('Line 6 is not an integer', result.output) # import with min catches format errors within its purview result = self.runner.invoke(tools, [ 'cache-import', '--type', 'IntSequence1', '--input-path', self.in_dir2, '--cache', str(self.cache.path), '--key', 'foo', '--validate-level', 'min' ]) 
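# in_dir2 places its non-integer value on line 2, within the first five
# records that min-level validation inspects, so this import should fail.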
self.assertEqual(result.exit_code, 1) self.assertIn('Line 2 is not an integer', result.output) class TestCacheTools(unittest.TestCase): def setUp(self): get_dummy_plugin() self.runner = CliRunner() self.plugin_command = RootCommand().get_command( ctx=None, name='dummy-plugin') self.tempdir = \ tempfile.TemporaryDirectory(prefix='qiime2-q2cli-test-temp-') self.art1 = Artifact.import_data('IntSequence1', [0, 1, 2]) self.art2 = Artifact.import_data('IntSequence1', [3, 4, 5]) self.art3 = Artifact.import_data('IntSequence1', [6, 7, 8]) self.art4 = Artifact.import_data('IntSequence2', [9, 10, 11]) self.to_import = os.path.join(self.tempdir.name, 'to_import') self.art1.export_data(self.to_import) self.cache = Cache(os.path.join(self.tempdir.name, 'new_cache')) def tearDown(self): self.tempdir.cleanup() def test_cache_create(self): cache_path = os.path.join(self.tempdir.name, 'created_cache') result = self.runner.invoke( tools, ['cache-create', '--cache', cache_path]) success = "Created cache at '%s'\n" % cache_path self.assertEqual(success, result.output) self.assertTrue(Cache.is_cache(cache_path)) def test_cache_remove(self): self.cache.save(self.art1, 'key') self.assertTrue('key' in self.cache.get_keys()) result = self.runner.invoke( tools, ['cache-remove', '--cache', str(self.cache.path), '--key', 'key']) success = "Removed key 'key' from cache '%s'\n" % self.cache.path self.assertEqual(success, result.output) self.assertFalse('key' in self.cache.get_keys()) def test_cache_garbage_collection(self): # Data referenced directly by key self.cache.save(self.art1, 'foo') # Data referenced by pool that is referenced by key pool = self.cache.create_pool(key='bar') pool.save(self.art2) # We will be manually deleting the keys that back these two self.cache.save(self.art3, 'baz') pool = self.cache.create_pool(key='qux') pool.save(self.art4) # What we expect to see before and after gc expected_pre_gc_contents = \ set(('./VERSION', 'keys/foo', 'keys/bar', 'keys/baz', 'keys/qux', f'pools/bar/{self.art2.uuid}', f'pools/qux/{self.art4.uuid}', f'data/{self.art1.uuid}', f'data/{self.art2.uuid}', f'data/{self.art3.uuid}', f'data/{self.art4.uuid}')) expected_post_gc_contents = \ set(('./VERSION', 'keys/foo', 'keys/bar', f'pools/bar/{self.art2.uuid}', f'data/{self.art1.uuid}', f'data/{self.art2.uuid}')) # Assert cache looks how we want pre gc pre_gc_contents = _get_cache_contents(self.cache) self.assertEqual(expected_pre_gc_contents, pre_gc_contents) # Delete keys self.cache.remove(self.cache.keys / 'baz') self.cache.remove(self.cache.keys / 'qux') # Make sure Python's garbage collector gets the process pool symlinks # to the artifact that was keyed on baz and the one in the qux pool gc.collect() result = self.runner.invoke( tools, ['cache-garbage-collection', '--cache', str(self.cache.path)]) success = "Ran garbage collection on cache at '%s'\n" % self.cache.path self.assertEqual(success, result.output) # Assert cache looks how we want post gc post_gc_contents = _get_cache_contents(self.cache) self.assertEqual(expected_post_gc_contents, post_gc_contents) def test_cache_store(self): artifact = os.path.join(self.tempdir.name, 'artifact.qza') self.art1.save(artifact) result = self.runner.invoke( tools, ['cache-store', '--cache', str(self.cache.path), '--artifact-path', artifact, '--key', 'key']) success = "Saved the artifact '%s' to the cache '%s' under the key " \ "'key'\n" % (artifact, self.cache.path) self.assertEqual(success, result.output) def test_cache_fetch(self): artifact = os.path.join(self.tempdir.name, 
'artifact.qza') self.cache.save(self.art1, 'key') result = self.runner.invoke( tools, ['cache-fetch', '--cache', str(self.cache.path), '--key', 'key', '--output-path', artifact]) success = "Loaded artifact with the key 'key' from the cache '%s' " \ "and saved it to the file '%s'\n" % (self.cache.path, artifact) self.assertEqual(success, result.output) def test_cache_roundtrip(self): in_artifact = os.path.join(self.tempdir.name, 'in_artifact.qza') out_artifact = os.path.join(self.tempdir.name, 'out_artifact.qza') self.art1.save(in_artifact) result = self.runner.invoke( tools, ['cache-store', '--cache', str(self.cache.path), '--artifact-path', in_artifact, '--key', 'key']) success = "Saved the artifact '%s' to the cache '%s' under the key " \ "'key'\n" % (in_artifact, self.cache.path) self.assertEqual(success, result.output) result = self.runner.invoke( tools, ['cache-fetch', '--cache', str(self.cache.path), '--key', 'key', '--output-path', out_artifact]) success = "Loaded artifact with the key 'key' from the cache '%s' " \ "and saved it to the file '%s'\n" % (self.cache.path, out_artifact) self.assertEqual(success, result.output) artifact = Artifact.load(out_artifact) self.assertEqual([0, 1, 2], artifact.view(list)) def test_cache_status(self): success_template = \ "Status of the cache at the path '%s':\n\n%s\n\n%s\n" # Empty cache result = self.runner.invoke( tools, ['cache-status', '--cache', str(self.cache.path)]) success = \ success_template % (str(self.cache.path), 'No data keys in cache', 'No pool keys in cache') self.assertEqual(success, result.output) # Cache with only data in_artifact = os.path.join(self.tempdir.name, 'in_artifact.qza') self.art1.save(in_artifact) self.runner.invoke( tools, ['cache-store', '--cache', str(self.cache.path), '--artifact-path', in_artifact, '--key', 'key']) result = self.runner.invoke( tools, ['cache-status', '--cache', str(self.cache.path)]) data_output = 'Data keys in cache:\ndata: key -> %s' % \ str(Result.peek(self.cache.data / str(self.art1.uuid))) success = \ success_template % (str(self.cache.path), data_output, 'No pool keys in cache') self.assertEqual(success, result.output) # Cache with data and pool pool = self.cache.create_pool(key='pool') pool.save(self.art2) result = self.runner.invoke( tools, ['cache-status', '--cache', str(self.cache.path)]) pool_output = 'Pool keys in cache:\npool: pool -> size = 1' success = \ success_template % (str(self.cache.path), data_output, pool_output) self.assertEqual(success, result.output) def test_cache_import(self): self.max_diff = None result = self.runner.invoke( tools, ['cache-import', '--type', 'IntSequence1', '--input-path', self.to_import, '--cache', f'{self.cache.path}', '--key', 'foo']) success = 'Imported %s as IntSequenceDirectoryFormat to %s:foo\n' % \ (self.to_import, self.cache.path) self.assertEqual(success, result.output) def _get_cache_contents(cache): """Gets contents of cache not including contents of the artifacts themselves relative to the root of the cache """ cache_contents = set() rel_keys = os.path.relpath(cache.keys, cache.path) rel_data = os.path.relpath(cache.data, cache.path) rel_pools = os.path.relpath(cache.pools, cache.path) rel_cache = os.path.relpath(cache.path, cache.path) for key in os.listdir(cache.keys): cache_contents.add(os.path.join(rel_keys, key)) for art in os.listdir(cache.data): cache_contents.add(os.path.join(rel_data, art)) for pool in os.listdir(cache.pools): for link in os.listdir(os.path.join(cache.pools, pool)): cache_contents.add(os.path.join(rel_pools, 
pool, link)) for elem in os.listdir(cache.path): if os.path.isfile(os.path.join(cache.path, elem)): cache_contents.add(os.path.join(rel_cache, elem)) return cache_contents class TestPeek(unittest.TestCase): def setUp(self): self.runner = CliRunner() self.tempdir = tempfile.mkdtemp(prefix='qiime2-q2cli-test-temp-') # create artifact self.artifact = os.path.join(self.tempdir, 'artifact.qza') Artifact.import_data( 'Mapping', {'foo': 'bar'}).save(self.artifact) # create visualization qiime_cli = RootCommand() command = qiime_cli.get_command(ctx=None, name='dummy-plugin') self.viz = os.path.join(self.tempdir, 'viz.qzv') self.ints = os.path.join(self.tempdir, 'ints.qza') ints = Artifact.import_data( 'IntSequence1', [0, 42, 43], list) ints.save(self.ints) self.runner.invoke( command, ['most-common-viz', '--i-ints', self.ints, '--o-visualization', self.viz, '--verbose']) def tearDown(self): shutil.rmtree(self.tempdir) def test_single_artifact(self): result = self.runner.invoke(tools, ['peek', self.artifact]) self.assertEqual(result.exit_code, 0) self.assertIn("UUID:", result.output) self.assertIn("Type:", result.output) self.assertIn("Data format:", result.output) self.assertEqual(result.output.count('\n'), 3) def test_single_visualization(self): result = self.runner.invoke(tools, ['peek', self.viz]) self.assertEqual(result.exit_code, 0) self.assertIn("UUID:", result.output) self.assertIn("Type:", result.output) self.assertNotIn("Data format:", result.output) self.assertEqual(result.output.count('\n'), 2) def test_artifact_and_visualization(self): result = self.runner.invoke(tools, ['peek', self.artifact, self.viz]) self.assertEqual(result.exit_code, 0) self.assertIn("UUID", result.output) self.assertIn("Type", result.output) self.assertIn("Data Format", result.output) self.assertIn("N/A", result.output) self.assertEqual(result.output.count('\n'), 3) def test_single_file_tsv(self): result = self.runner.invoke(tools, ['peek', '--tsv', self.artifact]) self.assertIn("Filename\tType\tUUID\tData Format\n", result.output) self.assertIn("artifact.qza", result.output) self.assertEqual(result.output.count('\t'), 6) self.assertEqual(result.output.count('\n'), 2) def test_multiple_file_tsv(self): result = self.runner.invoke(tools, ['peek', '--tsv', self.artifact, self.viz]) self.assertIn("Filename\tType\tUUID\tData Format\n", result.output) self.assertIn("artifact.qza", result.output) self.assertIn("viz.qzv", result.output) self.assertEqual(result.output.count('\t'), 9) self.assertEqual(result.output.count('\n'), 3) class TestListTypes(unittest.TestCase): def setUp(self): self.runner = CliRunner() self.pm = PluginManager() def tearDown(self): pass def test_list_all_types(self): result = self.runner.invoke(tools, ['list-types']) self.assertEqual(result.exit_code, 0) for name, artifact_class_record in self.pm.artifact_classes.items(): self.assertIn(name, result.output) self.assertIn(artifact_class_record.description, result.output) def test_list_types_fuzzy(self): types = list(self.pm.artifact_classes)[:5] result = self.runner.invoke(tools, ['list-types', *types]) self.assertEqual(result.exit_code, 0) # split on \n\n because types and their description are separated # by two newlines # len - 1 because split includes '' for the last \n\n split self.assertGreaterEqual(len(result.output.split('\n\n')) - 1, len(types)) def test_list_types_strict(self): types = list(self.pm.artifact_classes)[:5] result = self.runner.invoke(tools, ['list-types', '--strict', *types]) self.assertEqual(result.exit_code, 0) 
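# with --strict, only exact type-name matches are listed, so the number of
# rendered entries should equal the number of types requested.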
self.assertEqual(len(result.output.split('\n\n')) - 1, len(types)) result = self.runner.invoke(tools, ['list-types', '--strict', types[0] + 'x']) self.assertEqual(result.exit_code, 0) self.assertEqual(len(result.output), 0) result = self.runner.invoke(tools, ['list-types', '--strict', *types, types[0] + 'x']) self.assertEqual(result.exit_code, 0) self.assertEqual(len(result.output.split('\n\n')) - 1, len(types)) def test_list_types_tsv(self): result = self.runner.invoke(tools, ['list-types', '--tsv']) self.assertEqual(result.exit_code, 0) # len - 1 because \n split produces a final '' self.assertEqual(len(result.output.split('\n')) - 1, len(self.pm.artifact_classes)) no_description_count = 0 for name, artifact_class_record in self.pm.artifact_classes.items(): self.assertIn(name, result.output) self.assertIn(artifact_class_record.description, result.output) if artifact_class_record.description == '': no_description_count += 1 self.assertEqual(no_description_count, result.output.count('\t\n')) class TestListFormats(unittest.TestCase): def setUp(self): self.runner = CliRunner() self.pm = PluginManager() def tearDown(self): pass def test_list_all_importable_formats(self): result = self.runner.invoke(tools, ['list-formats', '--importable']) self.assertEqual(result.exit_code, 0) for name, format_record in self.pm.importable_formats.items(): self.assertIn(name, result.output) docstring = format_record.format.__doc__ if docstring: description = docstring.split('\n\n')[0].strip() for word in description: self.assertIn(word.strip(), result.output) def test_list_all_exportable_formats(self): result = self.runner.invoke(tools, ['list-formats', '--exportable']) self.assertEqual(result.exit_code, 0) for name, format_record in self.pm.exportable_formats.items(): self.assertIn(name, result.output) docstring = format_record.format.__doc__ if docstring: description = docstring.split('\n\n')[0].strip() for word in description: self.assertIn(word.strip(), result.output) def test_list_formats_fuzzy(self): formats = list(self.pm.importable_formats)[:5] result = self.runner.invoke(tools, ['list-formats', '--importable', *formats]) self.assertEqual(result.exit_code, 0) # see TestListTypes.test_list_types_fuzzy self.assertGreaterEqual(len(result.output.split('\n\n')) - 1, len(formats)) def test_list_formats_strict(self): formats = list(self.pm.exportable_formats)[:5] result = self.runner.invoke(tools, ['list-formats', '--exportable', '--strict', *formats]) self.assertEqual(result.exit_code, 0) self.assertEqual(len(result.output.split('\n\n')) - 1, len(formats)) result = self.runner.invoke(tools, ['list-formats', '--exportable', '--strict', formats[0] + 'x']) self.assertEqual(result.exit_code, 0) self.assertEqual(len(result.output), 0) result = self.runner.invoke(tools, ['list-formats', '--exportable', '--strict', *formats, formats[0] + 'x']) self.assertEqual(result.exit_code, 0) self.assertEqual(len(result.output.split('\n\n')) - 1, len(formats)) def test_list_formats_tsv(self): result = self.runner.invoke(tools, ['list-formats', '--importable', '--tsv']) self.assertEqual(result.exit_code, 0) # len - 1 because \n split produces a final '' self.assertEqual(len(result.output.split('\n')) - 1, len(self.pm.importable_formats)) no_description_count = 0 for name, format_record in self.pm.importable_formats.items(): self.assertIn(name, result.output) docstring = format_record.format.__doc__ if docstring: description = docstring.split('\n\n')[0].strip() for word in description: self.assertIn(word.strip(), result.output) 
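# formats without a docstring render an empty description, which shows up
# as a trailing tab before the newline in the TSV output.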
if format_record.format.__doc__ is None: no_description_count += 1 self.assertEqual(no_description_count, result.output.count('\t\n')) class TestReplay(unittest.TestCase): def setUp(self): self.runner = CliRunner() self.pm = PluginManager() self.dp = self.pm.plugins['dummy-plugin'] self.tempdir = tempfile.mkdtemp(prefix='q2cli-test-replay-temp-') # contrive artifacts with different sorts of provenance int_seq1 = Artifact.import_data('IntSequence1', [1, 2, 3]) int_seq2 = Artifact.import_data('IntSequence1', [4, 5, 6]) int_seq3 = Artifact.import_data('IntSequence2', [7, 8]) concat_ints = self.dp.actions['concatenate_ints'] concated_ints, = concat_ints(int_seq1, int_seq2, int_seq3, 9, 0) concated_ints.save(os.path.join(self.tempdir, 'concated_ints.qza')) outer_dir = os.path.join(self.tempdir, 'outer_dir') inner_dir = os.path.join(self.tempdir, 'outer_dir', 'inner_dir') os.mkdir(outer_dir) os.mkdir(inner_dir) shutil.copy(os.path.join(self.tempdir, 'concated_ints.qza'), outer_dir) int_seq = Artifact.import_data('IntSequence1', [1, 2, 3, 4]) left_ints, _ = self.dp.actions['split_ints'](int_seq) left_ints.save(os.path.join(inner_dir, 'left_ints.qza')) mapping = Artifact.import_data('Mapping', {'qiime': 2, 'triangle': 3}) int_seq_with_md, = self.dp.actions['identity_with_metadata']( int_seq1, mapping.view(Metadata)) int_seq_with_md.save(os.path.join(self.tempdir, 'int_seq_with_md.qza')) def tearDown(self): shutil.rmtree(self.tempdir) def test_replay_provenance(self): in_fp = os.path.join(self.tempdir, 'concated_ints.qza') out_fp = os.path.join(self.tempdir, 'rendered.txt') result = self.runner.invoke( tools, ['replay-provenance', '--in-fp', in_fp, '--out-fp', out_fp] ) self.assertEqual(result.exit_code, 0) with open(out_fp, 'r') as fh: rendered = fh.read() self.assertIn('qiime tools import', rendered) self.assertIn('--type \'IntSequence1\'', rendered) self.assertIn('--type \'IntSequence2\'', rendered) self.assertIn('--input-path ', rendered) self.assertIn('--output-path int-sequence1-0.qza', rendered) self.assertIn('--output-path int-sequence1-1.qza', rendered) self.assertIn('--output-path int-sequence2-0.qza', rendered) self.assertIn('qiime dummy-plugin concatenate-ints', rendered) self.assertRegex(rendered, '--i-ints[12] int-sequence1-0.qza') self.assertRegex(rendered, '--i-ints[12] int-sequence1-1.qza') self.assertIn('--i-ints3 int-sequence2-0.qza', rendered) self.assertIn('--p-int1 9', rendered) self.assertIn('--p-int2 0', rendered) self.assertIn('--o-concatenated-ints concatenated-ints-0.qza', rendered) def test_replay_provenance_python(self): in_fp = os.path.join(self.tempdir, 'concated_ints.qza') out_fp = os.path.join(self.tempdir, 'rendered.txt') result = self.runner.invoke( tools, ['replay-provenance', '--in-fp', in_fp, '--out-fp', out_fp, '--usage-driver', 'python3'] ) self.assertEqual(result.exit_code, 0) with open(out_fp, 'r') as fh: rendered = fh.read() self.assertIn('from qiime2 import Artifact', rendered) self.assertIn('Artifact.import_data', rendered) self.assertIn('dummy_plugin_actions.concatenate_ints', rendered) def test_replay_provenance_recurse(self): """ If the directory is parsed recursively, both the concated_ints.qza and left_ints.qza will be captured. 
""" in_fp = os.path.join(self.tempdir, 'outer_dir') out_fp = os.path.join(self.tempdir, 'rendered.txt') result = self.runner.invoke( tools, ['replay-provenance', '--in-fp', in_fp, '--out-fp', out_fp, '--usage-driver', 'python3', '--recurse'] ) self.assertEqual(result.exit_code, 0) with open(out_fp, 'r') as fh: rendered = fh.read() self.assertIn('dummy_plugin_actions.concatenate_ints', rendered) self.assertIn('dummy_plugin_actions.split_ints', rendered) def test_replay_provenance_use_md_without_parse(self): in_fp = os.path.join(self.tempdir, 'outer_dir') out_fp = os.path.join(self.tempdir, 'rendered.txt') result = self.runner.invoke( tools, ['replay-provenance', '--in-fp', in_fp, '--out-fp', out_fp, '--no-parse-metadata', '--use-recorded-metadata'] ) self.assertEqual(result.exit_code, 1) self.assertIsInstance(result.exception, ValueError) self.assertRegex(str(result.exception), 'Metadata not parsed for replay') @patch('qiime2.sdk.util.get_available_usage_drivers', return_value={'cli': ReplayCLIUsage}) def test_replay_provenance_usage_driver_not_available(self, patch): in_fp = os.path.join(self.tempdir, 'concated_ints.qza') out_fp = os.path.join(self.tempdir, 'rendered.txt') result = self.runner.invoke( tools, ['replay-provenance', '--in-fp', in_fp, '--out-fp', out_fp, '--usage-driver', 'python3'] ) self.assertEqual(result.exit_code, 1) self.assertIsInstance(result.exception, ValueError) self.assertIn( 'python3 usage driver is not available', str(result.exception) ) def test_replay_citations(self): in_fp = os.path.join(self.tempdir, 'concated_ints.qza') out_fp = os.path.join(self.tempdir, 'citations.bib') result = self.runner.invoke( tools, ['replay-citations', '--in-fp', in_fp, '--out-fp', out_fp] ) self.assertEqual(result.exit_code, 0) with open(out_fp) as fh: bib_database = bp.load(fh) # use .*? 
        # to non-greedily match version strings
        exp = [
            r'action\|dummy-plugin:.*?\|method:concatenate_ints\|0',
            r'framework\|qiime2:.*?\|0',
            r'plugin\|dummy-plugin:.*?\|0',
            r'plugin\|dummy-plugin:.*?\|1',
            r'transformer\|dummy-plugin:.*?\|builtins:list->'
            r'IntSequenceDirectoryFormat\|0',
            r'transformer\|dummy-plugin:.*?\|builtins:list->'
            r'IntSequenceV2DirectoryFormat\|4',
            r'transformer\|dummy-plugin:.*?\|builtins:list->'
            r'IntSequenceV2DirectoryFormat\|5',
            r'transformer\|dummy-plugin:.*?\|builtins:list->'
            r'IntSequenceV2DirectoryFormat\|6',
            r'transformer\|dummy-plugin:.*?\|builtins:list->'
            r'IntSequenceV2DirectoryFormat\|8',
            r'view\|dummy-plugin:.*?\|IntSequenceDirectoryFormat\|0'
        ]
        self.assertEqual(len(exp), len(bib_database.entries))

        all_records_str = ''
        for record in bib_database.entries_dict.keys():
            all_records_str += f' {record}'
        for record in exp:
            self.assertRegex(all_records_str, record)

    def test_replay_citations_no_deduplicate(self):
        in_fp = os.path.join(self.tempdir, 'concated_ints.qza')
        out_fp = os.path.join(self.tempdir, 'citations.bib')
        result = self.runner.invoke(
            tools,
            ['replay-citations', '--in-fp', in_fp, '--out-fp', out_fp,
             '--no-deduplicate']
        )
        self.assertEqual(result.exit_code, 0)

        with open(out_fp) as fh:
            bib_database = bp.load(fh)
        self.assertEqual(28, len(bib_database.entries))

        with open(out_fp) as fh:
            file_contents = fh.read()
        framework_citations = \
            re.compile(r'framework\|qiime2:.*?\|0.*' * 4, re.DOTALL)
        self.assertRegex(file_contents, framework_citations)

    def test_replay_supplement(self):
        in_fp = os.path.join(self.tempdir, 'concated_ints.qza')
        out_fp = os.path.join(self.tempdir, 'supplement.zip')
        result = self.runner.invoke(
            tools,
            ['replay-supplement', '--in-fp', in_fp, '--out-fp', out_fp]
        )
        self.assertEqual(result.exit_code, 0)
        self.assertTrue(zipfile.is_zipfile(out_fp))

        exp = {
            'supplement/',
            'supplement/python3_replay.py',
            'supplement/cli_replay.sh',
            'supplement/citations.bib'
        }
        with zipfile.ZipFile(out_fp, 'r') as zfh:
            self.assertEqual(exp, set(zfh.namelist()))

    def test_replay_supplement_with_metadata(self):
        in_fp = os.path.join(self.tempdir, 'int_seq_with_md.qza')
        out_fp = os.path.join(self.tempdir, 'supplement.zip')
        result = self.runner.invoke(
            tools,
            ['replay-supplement', '--in-fp', in_fp, '--out-fp', out_fp]
        )
        self.assertEqual(result.exit_code, 0)
        self.assertTrue(zipfile.is_zipfile(out_fp))

        exp = {
            'supplement/',
            'supplement/python3_replay.py',
            'supplement/cli_replay.sh',
            'supplement/citations.bib',
            'supplement/recorded_metadata/',
            'supplement/recorded_metadata/'
            'dummy_plugin_identity_with_metadata_0/',
            'supplement/recorded_metadata/'
            'dummy_plugin_identity_with_metadata_0/metadata_0.tsv',
        }
        with zipfile.ZipFile(out_fp, 'r') as zfh:
            self.assertEqual(exp, set(zfh.namelist()))

    def test_replay_supplement_no_metadata_dump(self):
        in_fp = os.path.join(self.tempdir, 'int_seq_with_md.qza')
        out_fp = os.path.join(self.tempdir, 'supplement.zip')
        result = self.runner.invoke(
            tools,
            ['replay-supplement', '--in-fp', in_fp, '--out-fp', out_fp,
             '--no-dump-recorded-metadata']
        )
        self.assertEqual(result.exit_code, 0)
        self.assertTrue(zipfile.is_zipfile(out_fp))

        not_exp = 'recorded_metadata/'
        with zipfile.ZipFile(out_fp, 'r') as zfh:
            self.assertNotIn(not_exp, set(zfh.namelist()))

    @patch('qiime2.sdk.util.get_available_usage_drivers',
           return_value={})
    def test_replay_supplement_usage_driver_not_available(self, patch):
        in_fp = os.path.join(self.tempdir, 'concated_ints.qza')
        out_fp = os.path.join(self.tempdir, 'rendered.txt')
        result = self.runner.invoke(
            tools, ['replay-supplement',
'--in-fp', in_fp, '--out-fp', out_fp] ) self.assertEqual(result.exit_code, 1) self.assertIsInstance(result.exception, ValueError) self.assertIn( 'no available usage drivers', str(result.exception) ) def test_replay_supplement_zipfile(self): with tempfile.TemporaryDirectory() as tempdir: in_fp = os.path.join(self.tempdir, 'concated_ints.qza') out_fp = os.path.join(tempdir, 'supplement.zip') result = self.runner.invoke( tools, ['replay-supplement', '--in-fp', in_fp, '--out-fp', out_fp] ) self.assertEqual(result.exit_code, 0) self.assertTrue(zipfile.is_zipfile(out_fp)) unzipped_path = os.path.join(tempdir, 'extracted') os.makedirs(unzipped_path) with zipfile.ZipFile(out_fp, 'r') as zfh: zfh.extractall(unzipped_path) self.assertEqual(os.listdir(unzipped_path), ['supplement']) if __name__ == "__main__": unittest.main() q2cli-2024.5.0/q2cli/tests/test_usage.py000066400000000000000000000142421462552630000176550ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- import os import sys import subprocess import tempfile import unittest from q2cli.core.usage import CLIUsage from qiime2.core.testing.util import get_dummy_plugin import pytest def _rt_labeler(val): if hasattr(val, 'id'): return val.id return val @pytest.fixture def dummy_plugin(monkeypatch): monkeypatch.setenv('QIIMETEST', '') return get_dummy_plugin() def get_templated_tests(): return [ ('concatenate_ints', """\ # This example demonstrates basic usage. qiime dummy-plugin concatenate-ints \\ --i-ints1 ints-a.qza \\ --i-ints2 ints-b.qza \\ --i-ints3 ints-c.qza \\ --p-int1 4 \\ --p-int2 2 \\ --o-concatenated-ints ints-d.qza # This example demonstrates chained usage (pt 1). qiime dummy-plugin concatenate-ints \\ --i-ints1 ints-a.qza \\ --i-ints2 ints-b.qza \\ --i-ints3 ints-c.qza \\ --p-int1 4 \\ --p-int2 2 \\ --o-concatenated-ints ints-d.qza # This example demonstrates chained usage (pt 2). 
qiime dummy-plugin concatenate-ints \\ --i-ints1 ints-d.qza \\ --i-ints2 ints-b.qza \\ --i-ints3 ints-c.qza \\ --p-int1 41 \\ --p-int2 0 \\ --o-concatenated-ints concatenated-ints.qza # comment 1 # comment 2 # comment 1 # comment 2"""), ('identity_with_metadata', """\ qiime dummy-plugin identity-with-metadata \\ --i-ints ints.qza \\ --m-metadata-file md.tsv \\ --o-out out.qza qiime dummy-plugin identity-with-metadata \\ --i-ints ints.qza \\ --m-metadata-file md1.tsv md2.tsv \\ --o-out out.qza"""), ('identity_with_metadata_column', """\ qiime dummy-plugin identity-with-metadata-column \\ --i-ints ints.qza \\ --m-metadata-file md.tsv \\ --m-metadata-column a \\ --o-out out.qza"""), ('typical_pipeline', """\ qiime dummy-plugin typical-pipeline \\ --i-int-sequence ints.qza \\ --i-mapping mapper.qza \\ --p-do-extra-thing \\ --o-out-map out-map.qza \\ --o-left left.qza \\ --o-right right.qza \\ --o-left-viz left-viz.qzv \\ --o-right-viz right-viz.qzv qiime dummy-plugin typical-pipeline \\ --i-int-sequence ints1.qza \\ --i-mapping mapper1.qza \\ --p-do-extra-thing \\ --o-out-map out-map1.qza \\ --o-left left1.qza \\ --o-right right1.qza \\ --o-left-viz left-viz1.qzv \\ --o-right-viz right-viz1.qzv qiime dummy-plugin typical-pipeline \\ --i-int-sequence left1.qza \\ --i-mapping out-map1.qza \\ --p-no-do-extra-thing \\ --o-out-map out-map2.qza \\ --o-left left2.qza \\ --o-right right2.qza \\ --o-left-viz left-viz2.qzv \\ --o-right-viz right-viz2.qzv qiime dev assert-result-data right2.qza \\ --zip-data-path ints.txt \\ --expression 1 qiime dev assert-result-type right2.qza \\ --qiime-type IntSequence1 qiime dev assert-result-type out-map1.qza \\ --qiime-type Mapping"""), ('optional_artifacts_method', """\ qiime dummy-plugin optional-artifacts-method \\ --i-ints ints.qza \\ --p-num1 1 \\ --o-output output1.qza qiime dummy-plugin optional-artifacts-method \\ --i-ints ints.qza \\ --p-num1 1 \\ --p-num2 2 \\ --o-output output2.qza qiime dummy-plugin optional-artifacts-method \\ --i-ints ints.qza \\ --p-num1 1 \\ --o-output output3.qza qiime dummy-plugin optional-artifacts-method \\ --i-ints ints.qza \\ --i-optional1 output3.qza \\ --p-num1 3 \\ --p-num2 4 \\ --o-output output4.qza"""), ('variadic_input_method', """\ qiime dummy-plugin variadic-input-method \\ --i-ints ints-a.qza ints-b.qza \\ --i-int-set single-int1.qza single-int2.qza \\ --p-nums 7 8 9 \\ --o-output out.qza"""), ('list_of_ints', """\ qiime dummy-plugin list-of-ints \\ --i-ints ints/ \\ --o-output out/"""), ] _templ_ids = [x[0] for x in get_templated_tests()] @pytest.mark.parametrize('action,exp', get_templated_tests(), ids=_templ_ids) def test_templated(dummy_plugin, action, exp): action = dummy_plugin.actions[action] obs = '' for example_f in action.examples.values(): use = CLIUsage(enable_assertions=True) example_f(use) obs += use.render() obs += '\n' # trim final newline obs = obs[:-1] assert exp == obs def get_rt_tests(): tests = [] try: plugin = get_dummy_plugin() except RuntimeError: return tests for action in plugin.actions.values(): for name in action.examples: tests.append((action, name)) return tests @pytest.mark.parametrize('action,example', get_rt_tests(), ids=_rt_labeler) def test_round_trip(action, example): example_f = action.examples[example] use = CLIUsage(enable_assertions=True) example_f(use) rendered = use.render() if sys.platform.startswith('linux'): # TODO: remove me when arrays are not used in shell extra = dict(executable='/bin/bash') else: extra = dict() with tempfile.TemporaryDirectory() as tmpdir: for ref, 
data in use.get_example_data(): data.save(os.path.join(tmpdir, ref)) subprocess.run(rendered, shell=True, check=True, cwd=tmpdir, env={**os.environ}, **extra) class ReplayResultCollectionTests(unittest.TestCase): @classmethod def setUpClass(cls): cls.plugin = get_dummy_plugin() def test_construct_and_access_collection(self): action = self.plugin.actions['dict_of_ints'] use = CLIUsage() action.examples['construct_and_access_collection'](use) exp = """\ ## constructing result collection ## rc_name=rc-in/ ext=.qza keys=( a b ) names=( ints-a.qza ints-b.qza ) construct_result_collection ## qiime dummy-plugin dict-of-ints \\ --i-ints rc-in/ \\ --o-output rc-out/ ## accessing result collection member ## ln -s rc-out/b.qza ints-b-from-collection.qza ##""" self.assertEqual(exp, use.render()) q2cli-2024.5.0/q2cli/util.py000066400000000000000000000376421462552630000153360ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. # ---------------------------------------------------------------------------- class OutOfDisk(Exception): pass def get_app_dir(): import os conda_prefix = os.environ.get('CONDA_PREFIX') if conda_prefix is not None and os.access(conda_prefix, os.W_OK | os.X_OK): return os.path.join(conda_prefix, 'var', 'q2cli') else: import click return click.get_app_dir('q2cli', roaming=False) # NOTE: `get_cache_dir` and `get_completion_path` live here instead of # `q2cli.cache` because `q2cli.cache` can be slow to import. # `get_completion_path` (which relies on `get_cache_dir`) is imported and # executed by the Bash completion function each time the user hits , so it # must be quick to import. 
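# Illustrative sketch (derived from get_app_dir() above and the two helpers
# below, not upstream documentation): in a writable conda environment these
# helpers resolve to paths like
#
#   get_app_dir()         -> $CONDA_PREFIX/var/q2cli
#   get_cache_dir()       -> $CONDA_PREFIX/var/q2cli/cache
#   get_completion_path() -> $CONDA_PREFIX/var/q2cli/cache/completion.sh
#
# Outside a conda environment, get_app_dir() falls back to
# click.get_app_dir('q2cli', roaming=False).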
def get_cache_dir(): import os.path return os.path.join(get_app_dir(), 'cache') def get_completion_path(): import os.path return os.path.join(get_cache_dir(), 'completion.sh') def hidden_to_cli_name(name): # Safety first if not name.startswith('_'): raise ValueError(f"The name '{name}' does not start with '_' meaning" " it is not a hidden action and this method should" " not have been called on it.") name = to_cli_name(name) # Retain the leading _ return name.replace('-', '_', 1) def to_cli_name(name): return name.replace('_', '-') def to_snake_case(name): return name.replace('-', '_') def exit_with_error(e, header='An error has been encountered:', traceback='stderr', status=1): import sys import traceback as tb import textwrap import click from q2cli.core.config import CONFIG footer = [] # footer only exists if traceback is set tb_file = None if traceback == 'stderr': tb_file = sys.stderr footer = ['See above for debug info.'] elif traceback is not None: tb_file = traceback footer = ['Debug info has been saved to %s' % tb_file.name] error = textwrap.indent(str(e), ' ') segments = [header, error] + footer if traceback is not None: tb.print_exception(type(e), e, e.__traceback__, file=tb_file) tb_file.write('\n') click.echo(CONFIG.cfg_style('error', '\n\n'.join(segments)), err=True) if not footer: click.echo(err=True) # extra newline to look normal try: click.get_current_context().exit(status) except RuntimeError: sys.exit(status) def output_in_cache(fp): """Determines if an output path follows the format /path_to_extant_cache:key """ from qiime2.core.cache import Cache # Tells us right away this isn't in a cache if ':' not in fp: return False cache_path, key = _get_cache_path_and_key(fp) try: if Cache.is_cache(cache_path): Cache.validate_key(key) return True except FileNotFoundError as e: # If cache_path doesn't exist, don't treat this as a cache output if 'No such file or directory' in str(e): pass else: raise e # We don't have a cache at all return False def get_close_matches(name, possibilities): import difflib name = name.lower() # bash completion makes an incomplete arg most likely matches = [m for m in possibilities if m.startswith(name)] if not matches: # otherwise, it may be misspelled matches = difflib.get_close_matches(name, possibilities, cutoff=0.8) matches.sort() if len(matches) > 5: # this is probably a good time to look at --help matches = matches[:4] + ['...'] return matches class pretty_failure: def __init__(self, header='An error has been encountered:', traceback='stderr', status=1): self.header = header self.traceback = traceback self.status = status def __call__(self, function): def wrapped(*args, **kwargs): with self: return function(*args, **kwargs, failure=self) # not using functools.wraps to keep import overhead low # click only seems to need the __name__ wrapped.__name__ = function.__name__ return wrapped def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): # if exit_with_error is called twice, then click.exit(1) or sys.exit(1) # will happen, no need to exit_with_error again in that case. if exc_val is not None and str(exc_val) != '1': exit_with_error(exc_val, self.header, self.traceback, self.status) return False def convert_primitive(ast): import click mapping = { 'Int': int, 'Str': str, 'Float': float, 'Color': str, 'Bool': bool } # TODO: it would be a good idea to refactor this someday, but until then # just handle the few predicates we know about. 
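    # Hedged illustration of the AST shape handled below (inferred from the
    # branches that follow, not from a framework specification):
    #   {'name': 'Int',
    #    'predicate': {'name': 'Range', 'range': [1, 10],
    #                  'inclusive': [True, False]}}
    # i.e. an Int restricted to [1, 10); because click.IntRange is always
    # inclusive, the Range branch would map this to click.IntRange(1, 9).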
predicate = ast['predicate'] if predicate: if predicate['name'] == 'Choices' and ast['name'] == 'Str': return click.Choice(predicate['choices']) elif predicate['name'] == 'Range' and ast['name'] == 'Int': start = predicate['range'][0] end = predicate['range'][1] # click.IntRange is always inclusive if start is not None and not predicate['inclusive'][0]: start += 1 if end is not None and not predicate['inclusive'][1]: end -= 1 return click.IntRange(start, end) elif predicate['name'] == 'Range' and ast['name'] == 'Float': # click.FloatRange will be in click 7.0, so for now the # range handling will just fallback to qiime2. return mapping['Float'] else: raise NotImplementedError() else: return mapping[ast['name']] def citations_option(get_citation_records): import click from q2cli.core.config import CONFIG def callback(ctx, param, value): if not value or ctx.resilient_parsing: return records = get_citation_records() if records: import io import qiime2.sdk citations = qiime2.sdk.Citations( [('key%d' % i, r) for i, r in enumerate(records)]) with io.StringIO() as fh: fh.write('% use `qiime tools citations` on a QIIME 2 result' ' for complete list\n\n') citations.save(fh) click.echo(fh.getvalue(), nl=False) ctx.exit() else: click.echo( CONFIG.cfg_style('problem', 'No citations found.'), err=True) ctx.exit(1) return click.Option(['--citations'], is_flag=True, expose_value=False, is_eager=True, callback=callback, help='Show citations and exit.') def example_data_option(get_plugin, action_name=None): import click from q2cli.click.type import OutDirType def callback(ctx, param, value): if not value or ctx.resilient_parsing: return else: import q2cli.core.usage as usage plugin = get_plugin() if action_name is not None: action = plugin.actions[action_name] generator = usage.write_example_data(action, value) else: generator = usage.write_plugin_example_data(plugin, value) ran = False for hint, path in generator: click.secho('Saved %s to: %s' % (hint, path), fg='green') ran = True if ran: ctx.exit() else: click.secho('No example data found.', fg='yellow', err=True) ctx.exit(1) return click.Option(['--example-data'], type=OutDirType(), is_eager=True, expose_value=False, callback=callback, help='Write example data and exit.') def get_plugin_manager(): import qiime2.sdk try: return qiime2.sdk.PluginManager.reuse_existing() except qiime2.sdk.UninitializedPluginManagerError: import os if 'MYSTERY_STEW' in os.environ: from q2_mystery_stew.plugin_setup import create_plugin the_stew = create_plugin() pm = qiime2.sdk.PluginManager(add_plugins=False) pm.add_plugin(the_stew) return pm return qiime2.sdk.PluginManager() def load_metadata(fp): import qiime2 import sys metadata, error = _load_metadata_artifact(fp) if metadata is None: try: metadata = qiime2.Metadata.load(fp) except Exception as e: if error and ':' in fp: e = error header = ("There was an issue with loading the file %s as " "metadata:" % fp) tb = 'stderr' if '--verbose' in sys.argv else None exit_with_error(e, header=header, traceback=tb) return metadata def _load_metadata_artifact(fp): import qiime2 import sys artifact, error = _load_input(fp) artifact = artifact[1] if isinstance(error, OutOfDisk): raise error default_tb = 'stderr' # if that worked, we have an artifact or we've # already raised a critical error # otherwise, any normal errors can be ignored as its # most likely actually metadata not a qza if artifact: try: default_tb = None if isinstance(artifact, qiime2.Visualization): raise Exception( 'Visualizations cannot be viewed as QIIME 2 
metadata.') if not artifact.has_metadata(): raise Exception( f"Artifacts with type {artifact.type!r} cannot be viewed" " as QIIME 2 metadata.") default_tb = 'stderr' return artifact.view(qiime2.Metadata), None except Exception as e: header = ("There was an issue with viewing the artifact " f"{fp!r} as QIIME 2 Metadata:") tb = 'stderr' if '--verbose' in sys.argv else default_tb exit_with_error(e, header=header, traceback=tb) else: return None, error def _load_input(fp, view=False): # Just initialize the plugin manager. This is slow and not necessary if we # called this from qiime tools view. import os key = None if not view: _ = get_plugin_manager() # We are loading a collection from outside of a cache. This cannot be keyed if os.path.isdir(fp): if len(os.listdir(fp)) == 0: raise ValueError(f"Provided directory '{fp}' is empty.") artifact, error = _load_collection(fp) # We may be loading something from a cache with or without and additional # key, or we may be loading a piece of data from outside of a cache with a # key. We could also be loading a normal unkeyed artifact with a : in its # path elif ':' in fp: # First we assume this is just a weird filepath artifact, _ = _load_input_file(fp) # Then we check if it is a key:path if artifact is None: key, new_fp = _get_path_and_collection_key(fp) artifact, _ = _load_input_file(new_fp) # If we still have nothing if artifact is None: key = None # We assume this is a cache:key. We keep this error because we # assume if they had a : in their path they were trying to load # something from a cache artifact, error = _load_input_cache(fp) if error: # Then we check if it is a key:cache:key key, new_fp = _get_path_and_collection_key(fp) artifact, _ = _load_input_cache(new_fp) # If we ended up with an artifact, we disregard our error if artifact is not None: error = None # We are just loading a normal artifact on disk without silly colons in the # filepath else: artifact, error = _load_input_file(fp) if isinstance(error, OSError) and error.errno == 28: # abort as there's nothing anyone can do about this from qiime2.core.cache import get_cache path = str(get_cache().path) return None, OutOfDisk(f'There was not enough space left on {path!r} ' f'to use the artifact {fp!r}. (Try ' f'setting $TMPDIR to a directory with more ' f'space, or increasing the size of {path!r})') return (key, artifact), error # NOTE: These load collection functions are now virtually identical to class # methods on qiime2.sdk.Result def _load_collection(fp): import os import warnings order_fp = os.path.join(fp, '.order') if os.path.isfile(order_fp): artifacts, error = _load_ordered_collection(fp, order_fp) else: warnings.warn(f'The directory {fp} does not contain a .order file. 
' 'The files will be read into the collection in the ' 'order the filesystem provides them in.') artifacts, error = _load_unordered_collection(fp) return artifacts, error def _load_ordered_collection(fp, order_fp): import os artifacts = {} with open(order_fp) as order_fh: for key in order_fh.read().splitlines(): artifact_path = os.path.join(fp, f'{key}.qza') artifacts[key], error = _load_input_file(artifact_path) if error: return None, error return artifacts, None def _load_unordered_collection(fp): import os artifacts = {} for artifact in os.listdir(fp): artifact_fp = os.path.join(fp, artifact) artifacts[artifact], error = _load_input_file(artifact_fp) if error: return None, error return artifacts, None def _load_input_cache(fp): artifact = error = None try: artifact = try_as_cache_input(fp) except Exception as e: error = e return artifact, error def _load_input_file(fp): import qiime2.sdk from os.path import expanduser # If there is a leading ~ we expand it to be the path to home fp = expanduser(fp) # test if valid peek = None try: peek = qiime2.sdk.Result.peek(fp) except Exception as error: if isinstance(error, SyntaxError): raise error # ideally ValueError: X is not a QIIME archive. # but sometimes SyntaxError or worse return None, error # try to actually load try: artifact = qiime2.sdk.Result.load(fp) return artifact, None except Exception as e: if peek: # abort early as there's nothing else to do raise ValueError( "It looks like you have an Artifact but are missing the" " plugin(s) necessary to load it. Artifact has type" f" {peek.type!r} and format {peek.format!r}") from e else: error = e return None, error def try_as_cache_input(fp): """ Determine if an input is in a cache and load it from the cache if it is """ import os from qiime2 import Cache cache_path, key = _get_cache_path_and_key(fp) # We don't want to invent a new cache on disk here because if their input # exists their cache must also already exist if not os.path.exists(cache_path) or not Cache.is_cache(cache_path): raise ValueError(f"The path {cache_path!r} is not a valid cache.") cache = Cache(cache_path) return cache.load(key) def _get_cache_path_and_key(fp): return fp.rsplit(':', 1) def _get_path_and_collection_key(fp): return fp.split(':', 1) def get_default_recycle_pool(plugin_action): from hashlib import sha1 return f'recycle_{plugin_action}_' \ f'{sha1(plugin_action.encode("utf-8")).hexdigest()}' q2cli-2024.5.0/setup.cfg000066400000000000000000000002321462552630000145770ustar00rootroot00000000000000[versioneer] VCS = git style = pep440 versionfile_source = q2cli/_version.py versionfile_build = q2cli/_version.py tag_prefix = parentdir_prefix = q2cli- q2cli-2024.5.0/setup.py000066400000000000000000000017311462552630000144750ustar00rootroot00000000000000# ---------------------------------------------------------------------------- # Copyright (c) 2016-2023, QIIME 2 development team. # # Distributed under the terms of the Modified BSD License. # # The full license is in the file LICENSE, distributed with this software. 
# ---------------------------------------------------------------------------- from setuptools import setup, find_packages import versioneer setup( name='q2cli', version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), license='BSD-3-Clause', url='https://qiime2.org', packages=find_packages(), include_package_data=True, scripts=['bin/tab-qiime'], entry_points={ 'console_scripts': [ 'qiime=q2cli.__main__:qiime' ], 'qiime2.usage_drivers': [ 'cli=q2cli.core.usage:ReplayCLIUsage' ] }, package_data={ 'q2cli.tests': ['data/*'], 'q2cli.core': ['assets/*'] }, zip_safe=False, ) q2cli-2024.5.0/versioneer.py000066400000000000000000002060221462552630000155160ustar00rootroot00000000000000 # Version: 0.18 # flake8: noqa """The Versioneer - like a rocketeer, but for versions. The Versioneer ============== * like a rocketeer, but for versions! * https://github.com/warner/python-versioneer * Brian Warner * License: Public Domain * Compatible With: python2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, and pypy * [![Latest Version] (https://pypip.in/version/versioneer/badge.svg?style=flat) ](https://pypi.python.org/pypi/versioneer/) * [![Build Status] (https://travis-ci.org/warner/python-versioneer.png?branch=master) ](https://travis-ci.org/warner/python-versioneer) This is a tool for managing a recorded version number in distutils-based python projects. The goal is to remove the tedious and error-prone "update the embedded version string" step from your release process. Making a new release should be as easy as recording a new tag in your version-control system, and maybe making new tarballs. ## Quick Install * `pip install versioneer` to somewhere to your $PATH * add a `[versioneer]` section to your setup.cfg (see below) * run `versioneer install` in your source tree, commit the results ## Version Identifiers Source trees come from a variety of places: * a version-control system checkout (mostly used by developers) * a nightly tarball, produced by build automation * a snapshot tarball, produced by a web-based VCS browser, like github's "tarball from tag" feature * a release tarball, produced by "setup.py sdist", distributed through PyPI Within each source tree, the version identifier (either a string or a number, this tool is format-agnostic) can come from a variety of places: * ask the VCS tool itself, e.g. "git describe" (for checkouts), which knows about recent "tags" and an absolute revision-id * the name of the directory into which the tarball was unpacked * an expanded VCS keyword ($Id$, etc) * a `_version.py` created by some earlier build step For released software, the version identifier is closely related to a VCS tag. Some projects use tag names that include more than just the version string (e.g. "myproject-1.2" instead of just "1.2"), in which case the tool needs to strip the tag prefix to extract the version identifier. For unreleased software (between tags), the version identifier should provide enough information to help developers recreate the same tree, while also giving them an idea of roughly how old the tree is (after version 1.2, before version 1.3). Many VCS systems can report a description that captures this, for example `git describe --tags --dirty --always` reports things like "0.7-1-g574ab98-dirty" to indicate that the checkout is one revision past the 0.7 tag, has a unique revision id of "574ab98", and is "dirty" (it has uncommitted changes. 
The version identifier is used for multiple purposes: * to allow the module to self-identify its version: `myproject.__version__` * to choose a name and prefix for a 'setup.py sdist' tarball ## Theory of Operation Versioneer works by adding a special `_version.py` file into your source tree, where your `__init__.py` can import it. This `_version.py` knows how to dynamically ask the VCS tool for version information at import time. `_version.py` also contains `$Revision$` markers, and the installation process marks `_version.py` to have this marker rewritten with a tag name during the `git archive` command. As a result, generated tarballs will contain enough information to get the proper version. To allow `setup.py` to compute a version too, a `versioneer.py` is added to the top level of your source tree, next to `setup.py` and the `setup.cfg` that configures it. This overrides several distutils/setuptools commands to compute the version when invoked, and changes `setup.py build` and `setup.py sdist` to replace `_version.py` with a small static file that contains just the generated version data. ## Installation See [INSTALL.md](./INSTALL.md) for detailed installation instructions. ## Version-String Flavors Code which uses Versioneer can learn about its version string at runtime by importing `_version` from your main `__init__.py` file and running the `get_versions()` function. From the "outside" (e.g. in `setup.py`), you can import the top-level `versioneer.py` and run `get_versions()`. Both functions return a dictionary with different flavors of version information: * `['version']`: A condensed version string, rendered using the selected style. This is the most commonly used value for the project's version string. The default "pep440" style yields strings like `0.11`, `0.11+2.g1076c97`, or `0.11+2.g1076c97.dirty`. See the "Styles" section below for alternative styles. * `['full-revisionid']`: detailed revision identifier. For Git, this is the full SHA1 commit id, e.g. "1076c978a8d3cfc70f408fe5974aa6c092c949ac". * `['date']`: Date and time of the latest `HEAD` commit. For Git, it is the commit date in ISO 8601 format. This will be None if the date is not available. * `['dirty']`: a boolean, True if the tree has uncommitted changes. Note that this is only accurate if run in a VCS checkout, otherwise it is likely to be False or None * `['error']`: if the version string could not be computed, this will be set to a string describing the problem, otherwise it will be None. It may be useful to throw an exception in setup.py if this is set, to avoid e.g. creating tarballs with a version string of "unknown". Some variants are more useful than others. Including `full-revisionid` in a bug report should allow developers to reconstruct the exact code being tested (or indicate the presence of local changes that should be shared with the developers). `version` is suitable for display in an "about" box or a CLI `--version` output: it can be easily compared against release notes and lists of bugs fixed in various releases. The installer adds the following text to your `__init__.py` to place a basic version in `YOURPROJECT.__version__`: from ._version import get_versions __version__ = get_versions()['version'] del get_versions ## Styles The setup.cfg `style=` configuration controls how the VCS information is rendered into a version string. 
The default style, "pep440", produces a PEP440-compliant string, equal to the un-prefixed tag name for actual releases, and containing an additional "local version" section with more detail for in-between builds. For Git, this is TAG[+DISTANCE.gHEX[.dirty]] , using information from `git describe --tags --dirty --always`. For example "0.11+2.g1076c97.dirty" indicates that the tree is like the "1076c97" commit but has uncommitted changes (".dirty"), and that this commit is two revisions ("+2") beyond the "0.11" tag. For released software (exactly equal to a known tag), the identifier will only contain the stripped tag, e.g. "0.11". Other styles are available. See [details.md](details.md) in the Versioneer source tree for descriptions. ## Debugging Versioneer tries to avoid fatal errors: if something goes wrong, it will tend to return a version of "0+unknown". To investigate the problem, run `setup.py version`, which will run the version-lookup code in a verbose mode, and will display the full contents of `get_versions()` (including the `error` string, which may help identify what went wrong). ## Known Limitations Some situations are known to cause problems for Versioneer. This details the most significant ones. More can be found on Github [issues page](https://github.com/warner/python-versioneer/issues). ### Subprojects Versioneer has limited support for source trees in which `setup.py` is not in the root directory (e.g. `setup.py` and `.git/` are *not* siblings). The are two common reasons why `setup.py` might not be in the root: * Source trees which contain multiple subprojects, such as [Buildbot](https://github.com/buildbot/buildbot), which contains both "master" and "slave" subprojects, each with their own `setup.py`, `setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI distributions (and upload multiple independently-installable tarballs). * Source trees whose main purpose is to contain a C library, but which also provide bindings to Python (and perhaps other langauges) in subdirectories. Versioneer will look for `.git` in parent directories, and most operations should get the right version string. However `pip` and `setuptools` have bugs and implementation details which frequently cause `pip install .` from a subproject directory to fail to find a correct version string (so it usually defaults to `0+unknown`). `pip install --editable .` should work correctly. `setup.py install` might work too. Pip-8.1.1 is known to have this problem, but hopefully it will get fixed in some later version. [Bug #38](https://github.com/warner/python-versioneer/issues/38) is tracking this issue. The discussion in [PR #61](https://github.com/warner/python-versioneer/pull/61) describes the issue from the Versioneer side in more detail. [pip PR#3176](https://github.com/pypa/pip/pull/3176) and [pip PR#3615](https://github.com/pypa/pip/pull/3615) contain work to improve pip to let Versioneer work correctly. Versioneer-0.16 and earlier only looked for a `.git` directory next to the `setup.cfg`, so subprojects were completely unsupported with those releases. ### Editable installs with setuptools <= 18.5 `setup.py develop` and `pip install --editable .` allow you to install a project into a virtualenv once, then continue editing the source code (and test) without re-installing after every change. "Entry-point scripts" (`setup(entry_points={"console_scripts": ..})`) are a convenient way to specify executable scripts that should be installed along with the python package. 
These both work as expected when using modern setuptools. When using setuptools-18.5 or earlier, however, certain operations will cause `pkg_resources.DistributionNotFound` errors when running the entrypoint script, which must be resolved by re-installing the package. This happens when the install happens with one version, then the egg_info data is regenerated while a different version is checked out. Many setup.py commands cause egg_info to be rebuilt (including `sdist`, `wheel`, and installing into a different virtualenv), so this can be surprising. [Bug #83](https://github.com/warner/python-versioneer/issues/83) describes this one, but upgrading to a newer version of setuptools should probably resolve it. ### Unicode version strings While Versioneer works (and is continually tested) with both Python 2 and Python 3, it is not entirely consistent with bytes-vs-unicode distinctions. Newer releases probably generate unicode version strings on py2. It's not clear that this is wrong, but it may be surprising for applications when then write these strings to a network connection or include them in bytes-oriented APIs like cryptographic checksums. [Bug #71](https://github.com/warner/python-versioneer/issues/71) investigates this question. ## Updating Versioneer To upgrade your project to a new release of Versioneer, do the following: * install the new Versioneer (`pip install -U versioneer` or equivalent) * edit `setup.cfg`, if necessary, to include any new configuration settings indicated by the release notes. See [UPGRADING](./UPGRADING.md) for details. * re-run `versioneer install` in your source tree, to replace `SRC/_version.py` * commit any changed files ## Future Directions This tool is designed to make it easily extended to other version-control systems: all VCS-specific components are in separate directories like src/git/ . The top-level `versioneer.py` script is assembled from these components by running make-versioneer.py . In the future, make-versioneer.py will take a VCS name as an argument, and will construct a version of `versioneer.py` that is specific to the given VCS. It might also take the configuration arguments that are currently provided manually during installation by editing setup.py . Alternatively, it might go the other direction and include code from all supported VCS systems, reducing the number of intermediate scripts. ## License To make Versioneer easier to embed, all its code is dedicated to the public domain. The `_version.py` that it creates is also in the public domain. Specifically, both are released under the Creative Commons "Public Domain Dedication" license (CC0-1.0), as described in https://creativecommons.org/publicdomain/zero/1.0/ . """ from __future__ import print_function try: import configparser except ImportError: import ConfigParser as configparser import errno import json import os import re import subprocess import sys class VersioneerConfig: """Container for Versioneer configuration parameters.""" def get_root(): """Get the project root directory. We require that all commands are run from the project root, i.e. the directory that contains setup.py, setup.cfg, and versioneer.py . 
""" root = os.path.realpath(os.path.abspath(os.getcwd())) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): # allow 'python path/to/setup.py COMMAND' root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0]))) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): err = ("Versioneer was unable to run the project root directory. " "Versioneer requires setup.py to be executed from " "its immediate directory (like 'python setup.py COMMAND'), " "or in a way that lets it use sys.argv[0] to find the root " "(like 'python path/to/setup.py COMMAND').") raise VersioneerBadRootError(err) try: # Certain runtime workflows (setup.py install/develop in a setuptools # tree) execute all dependencies in a single python process, so # "versioneer" may be imported multiple times, and python's shared # module-import table will cache the first one. So we can't use # os.path.dirname(__file__), as that will find whichever # versioneer.py was first imported, even in later projects. me = os.path.realpath(os.path.abspath(__file__)) me_dir = os.path.normcase(os.path.splitext(me)[0]) vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0]) if me_dir != vsr_dir: print("Warning: build in %s is using versioneer.py from %s" % (os.path.dirname(me), versioneer_py)) except NameError: pass return root def get_config_from_root(root): """Read the project setup.cfg file to determine Versioneer config.""" # This might raise EnvironmentError (if setup.cfg is missing), or # configparser.NoSectionError (if it lacks a [versioneer] section), or # configparser.NoOptionError (if it lacks "VCS="). See the docstring at # the top of versioneer.py for instructions on writing your setup.cfg . 
setup_cfg = os.path.join(root, "setup.cfg") parser = configparser.SafeConfigParser() with open(setup_cfg, "r") as f: parser.readfp(f) VCS = parser.get("versioneer", "VCS") # mandatory def get(parser, name): if parser.has_option("versioneer", name): return parser.get("versioneer", name) return None cfg = VersioneerConfig() cfg.VCS = VCS cfg.style = get(parser, "style") or "" cfg.versionfile_source = get(parser, "versionfile_source") cfg.versionfile_build = get(parser, "versionfile_build") cfg.tag_prefix = get(parser, "tag_prefix") if cfg.tag_prefix in ("''", '""'): cfg.tag_prefix = "" cfg.parentdir_prefix = get(parser, "parentdir_prefix") cfg.verbose = get(parser, "verbose") return cfg class NotThisMethod(Exception): """Exception raised if a method is not valid for the current scenario.""" # these dictionaries contain VCS-specific tools LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator """Decorator to mark a method as the handler for a particular VCS.""" def decorate(f): """Store f in HANDLERS[vcs][method].""" if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): """Call the given command(s).""" assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %s" % dispcmd) print(e) return None, None else: if verbose: print("unable to find command, tried %s" % (commands,)) return None, None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %s (error)" % dispcmd) print("stdout was %s" % stdout) return None, p.returncode return stdout, p.returncode LONG_VERSION_PY['git'] = ''' # This file helps to compute a version number in source trees obtained from # git-archive tarball (such as those provided by githubs download-from-tag # feature). Distribution tarballs (built by setup.py sdist) and build # directories (produced by setup.py build) will contain a much shorter file # that just contains the computed version number. # This file is released into the public domain. Generated by # versioneer-0.18 (https://github.com/warner/python-versioneer) """Git implementation of _version.py.""" import errno import os import re import subprocess import sys def get_keywords(): """Get the keywords needed to look up the version information.""" # these strings will be replaced by git during git-archive. # setup.py/versioneer.py will grep for the variable names, so they must # each be defined on a line of their own. _version.py will just call # get_keywords(). 
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s" git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s" git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s" keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} return keywords class VersioneerConfig: """Container for Versioneer configuration parameters.""" def get_config(): """Create, populate and return the VersioneerConfig() object.""" # these strings are filled in when 'setup.py versioneer' creates # _version.py cfg = VersioneerConfig() cfg.VCS = "git" cfg.style = "%(STYLE)s" cfg.tag_prefix = "%(TAG_PREFIX)s" cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s" cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s" cfg.verbose = False return cfg class NotThisMethod(Exception): """Exception raised if a method is not valid for the current scenario.""" LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator """Decorator to mark a method as the handler for a particular VCS.""" def decorate(f): """Store f in HANDLERS[vcs][method].""" if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): """Call the given command(s).""" assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %%s" %% dispcmd) print(e) return None, None else: if verbose: print("unable to find command, tried %%s" %% (commands,)) return None, None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %%s (error)" %% dispcmd) print("stdout was %%s" %% stdout) return None, p.returncode return stdout, p.returncode def versions_from_parentdir(parentdir_prefix, root, verbose): """Try to determine the version from the parent directory name. Source tarballs conventionally unpack into a directory that includes both the project name and a version string. We will also support searching up two directory levels for an appropriately named parent directory """ rootdirs = [] for i in range(3): dirname = os.path.basename(root) if dirname.startswith(parentdir_prefix): return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None, "date": None} else: rootdirs.append(root) root = os.path.dirname(root) # up a level if verbose: print("Tried directories %%s but none started with prefix %%s" %% (str(rootdirs), parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): """Extract version information from the given file.""" # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. 
keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) if line.strip().startswith("git_date ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["date"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): """Get version information from git keywords.""" if not keywords: raise NotThisMethod("no keywords at all, weird") date = keywords.get("date") if date is not None: # git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant # datestamp. However we prefer "%%ci" (which expands to an "ISO-8601 # -like" string, which we must then edit to make compliant), because # it's been around since git-1.5.3, and it's too difficult to # discover which version we're using, or to work around using an # older one. date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %%d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%%s', no digits" %% ",".join(refs - tags)) if verbose: print("likely tags: %%s" %% ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %%s" %% r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None, "date": date} # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags", "date": None} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): """Get version from 'git describe' in the root of the source tree. This only gets called if the git-archive 'subst' keywords were *not* expanded, and _version.py hasn't already been rewritten with a short version string, meaning we're inside a checked out source tree. 
""" GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True) if rc != 0: if verbose: print("Directory %%s not under git control" %% root) raise NotThisMethod("'git rev-parse --git-dir' returned error") # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] # if there isn't one, this yields HEX[-dirty] (no NUM) describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long", "--match", "%%s*" %% tag_prefix], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? pieces["error"] = ("unable to parse git-describe output: '%%s'" %% describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%%s' doesn't start with prefix '%%s'" print(fmt %% (full_tag, tag_prefix)) pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'" %% (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits # commit date: see ISO-8601 comment in git_versions_from_keywords() date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"], cwd=root)[0].strip() pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) return pieces def plus_or_dot(pieces): """Return a + if we don't already have one, else return a .""" if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): """Build up version string, with post-release "local version identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty Exceptions: 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): """TAG[.post.devDISTANCE] -- No -dirty. Exceptions: 1: no tags. 
0.post.devDISTANCE """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%%d" %% pieces["distance"] else: # exception #1 rendered = "0.post.dev%%d" %% pieces["distance"] return rendered def render_pep440_post(pieces): """TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that .dev0 sorts backwards (a dirty tree will appear "older" than the corresponding clean one), but you shouldn't be releasing software with -dirty anyways. Exceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%%s" %% pieces["short"] else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%%s" %% pieces["short"] return rendered def render_pep440_old(pieces): """TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. Eexceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): """TAG[-DISTANCE-gHEX][-dirty]. Like 'git describe --tags --dirty --always'. Exceptions: 1: no tags. HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render_git_describe_long(pieces): """TAG-DISTANCE-gHEX[-dirty]. Like 'git describe --tags --dirty --always -long'. The distance/hash is unconditional. Exceptions: 1: no tags. HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render(pieces, style): """Render the given version pieces into the requested style.""" if pieces["error"]: return {"version": "unknown", "full-revisionid": pieces.get("long"), "dirty": None, "error": pieces["error"], "date": None} if not style or style == "default": style = "pep440" # the default if style == "pep440": rendered = render_pep440(pieces) elif style == "pep440-pre": rendered = render_pep440_pre(pieces) elif style == "pep440-post": rendered = render_pep440_post(pieces) elif style == "pep440-old": rendered = render_pep440_old(pieces) elif style == "git-describe": rendered = render_git_describe(pieces) elif style == "git-describe-long": rendered = render_git_describe_long(pieces) else: raise ValueError("unknown style '%%s'" %% style) return {"version": rendered, "full-revisionid": pieces["long"], "dirty": pieces["dirty"], "error": None, "date": pieces.get("date")} def get_versions(): """Get version information or return default if unable to do so.""" # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have # __file__, we can work backwards from there to the root. Some # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which # case we can only use expanded keywords. 
cfg = get_config() verbose = cfg.verbose try: return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose) except NotThisMethod: pass try: root = os.path.realpath(__file__) # versionfile_source is the relative path from the top of the source # tree (where the .git directory might live) to this file. Invert # this to find the root from __file__. for i in cfg.versionfile_source.split('/'): root = os.path.dirname(root) except NameError: return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to find root of source tree", "date": None} try: pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) return render(pieces, cfg.style) except NotThisMethod: pass try: if cfg.parentdir_prefix: return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) except NotThisMethod: pass return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to compute version", "date": None} ''' @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): """Extract version information from the given file.""" # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) if line.strip().startswith("git_date ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["date"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): """Get version information from git keywords.""" if not keywords: raise NotThisMethod("no keywords at all, weird") date = keywords.get("date") if date is not None: # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 # -like" string, which we must then edit to make compliant), because # it's been around since git-1.5.3, and it's too difficult to # discover which version we're using, or to work around using an # older one. date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". 
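        # Worked example (hypothetical refs, for illustration only): a set of
        # {"HEAD", "master", "release", "0.11"} is reduced to {"0.11"} by the
        # digit heuristic below.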
tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%s', no digits" % ",".join(refs - tags)) if verbose: print("likely tags: %s" % ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %s" % r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None, "date": date} # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags", "date": None} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): """Get version from 'git describe' in the root of the source tree. This only gets called if the git-archive 'subst' keywords were *not* expanded, and _version.py hasn't already been rewritten with a short version string, meaning we're inside a checked out source tree. """ GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True) if rc != 0: if verbose: print("Directory %s not under git control" % root) raise NotThisMethod("'git rev-parse --git-dir' returned error") # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] # if there isn't one, this yields HEX[-dirty] (no NUM) describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long", "--match", "%s*" % tag_prefix], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? 
pieces["error"] = ("unable to parse git-describe output: '%s'" % describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%s' doesn't start with prefix '%s'" print(fmt % (full_tag, tag_prefix)) pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" % (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits # commit date: see ISO-8601 comment in git_versions_from_keywords() date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip() pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) return pieces def do_vcs_install(manifest_in, versionfile_source, ipy): """Git-specific installation logic for Versioneer. For Git, this means creating/changing .gitattributes to mark _version.py for export-subst keyword substitution. """ GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] files = [manifest_in, versionfile_source] if ipy: files.append(ipy) try: me = __file__ if me.endswith(".pyc") or me.endswith(".pyo"): me = os.path.splitext(me)[0] + ".py" versioneer_file = os.path.relpath(me) except NameError: versioneer_file = "versioneer.py" files.append(versioneer_file) present = False try: f = open(".gitattributes", "r") for line in f.readlines(): if line.strip().startswith(versionfile_source): if "export-subst" in line.strip().split()[1:]: present = True f.close() except EnvironmentError: pass if not present: f = open(".gitattributes", "a+") f.write("%s export-subst\n" % versionfile_source) f.close() files.append(".gitattributes") run_command(GITS, ["add", "--"] + files) def versions_from_parentdir(parentdir_prefix, root, verbose): """Try to determine the version from the parent directory name. Source tarballs conventionally unpack into a directory that includes both the project name and a version string. We will also support searching up two directory levels for an appropriately named parent directory """ rootdirs = [] for i in range(3): dirname = os.path.basename(root) if dirname.startswith(parentdir_prefix): return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None, "date": None} else: rootdirs.append(root) root = os.path.dirname(root) # up a level if verbose: print("Tried directories %s but none started with prefix %s" % (str(rootdirs), parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") SHORT_VERSION_PY = """ # This file was generated by 'versioneer.py' (0.18) from # revision-control system data, or from the parent directory name of an # unpacked source archive. Distribution tarballs contain a pre-generated copy # of this file. 
import json version_json = ''' %s ''' # END VERSION_JSON def get_versions(): return json.loads(version_json) """ def versions_from_file(filename): """Try to determine the version from _version.py if present.""" try: with open(filename) as f: contents = f.read() except EnvironmentError: raise NotThisMethod("unable to read _version.py") mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON", contents, re.M | re.S) if not mo: mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON", contents, re.M | re.S) if not mo: raise NotThisMethod("no version_json in _version.py") return json.loads(mo.group(1)) def write_to_version_file(filename, versions): """Write the given version number to the given _version.py file.""" os.unlink(filename) contents = json.dumps(versions, sort_keys=True, indent=1, separators=(",", ": ")) with open(filename, "w") as f: f.write(SHORT_VERSION_PY % contents) print("set %s to '%s'" % (filename, versions["version"])) def plus_or_dot(pieces): """Return a + if we don't already have one, else return a .""" if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): """Build up version string, with post-release "local version identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty Exceptions: 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): """TAG[.post.devDISTANCE] -- No -dirty. Exceptions: 1: no tags. 0.post.devDISTANCE """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%d" % pieces["distance"] else: # exception #1 rendered = "0.post.dev%d" % pieces["distance"] return rendered def render_pep440_post(pieces): """TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that .dev0 sorts backwards (a dirty tree will appear "older" than the corresponding clean one), but you shouldn't be releasing software with -dirty anyways. Exceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%s" % pieces["short"] else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%s" % pieces["short"] return rendered def render_pep440_old(pieces): """TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. Eexceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): """TAG[-DISTANCE-gHEX][-dirty]. Like 'git describe --tags --dirty --always'. Exceptions: 1: no tags. 
    HEX[-dirty]  (note: no 'g' prefix)
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"]:
            rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
    else:
        # exception #1
        rendered = pieces["short"]
    if pieces["dirty"]:
        rendered += "-dirty"
    return rendered


def render_git_describe_long(pieces):
    """TAG-DISTANCE-gHEX[-dirty].

    Like 'git describe --tags --dirty --always --long'.
    The distance/hash is unconditional.

    Exceptions:
    1: no tags. HEX[-dirty]  (note: no 'g' prefix)
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
    else:
        # exception #1
        rendered = pieces["short"]
    if pieces["dirty"]:
        rendered += "-dirty"
    return rendered


def render(pieces, style):
    """Render the given version pieces into the requested style."""
    if pieces["error"]:
        return {"version": "unknown",
                "full-revisionid": pieces.get("long"),
                "dirty": None,
                "error": pieces["error"],
                "date": None}

    if not style or style == "default":
        style = "pep440"  # the default

    if style == "pep440":
        rendered = render_pep440(pieces)
    elif style == "pep440-pre":
        rendered = render_pep440_pre(pieces)
    elif style == "pep440-post":
        rendered = render_pep440_post(pieces)
    elif style == "pep440-old":
        rendered = render_pep440_old(pieces)
    elif style == "git-describe":
        rendered = render_git_describe(pieces)
    elif style == "git-describe-long":
        rendered = render_git_describe_long(pieces)
    else:
        raise ValueError("unknown style '%s'" % style)

    return {"version": rendered, "full-revisionid": pieces["long"],
            "dirty": pieces["dirty"], "error": None,
            "date": pieces.get("date")}


class VersioneerBadRootError(Exception):
    """The project root directory is unknown or missing key files."""


def get_versions(verbose=False):
    """Get the project version from whatever source is available.

    Returns dict with two keys: 'version' and 'full'.
    """
    if "versioneer" in sys.modules:
        # see the discussion in cmdclass.py:get_cmdclass()
        del sys.modules["versioneer"]

    root = get_root()
    cfg = get_config_from_root(root)

    assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
    handlers = HANDLERS.get(cfg.VCS)
    assert handlers, "unrecognized VCS '%s'" % cfg.VCS
    verbose = verbose or cfg.verbose
    assert cfg.versionfile_source is not None, \
        "please set versioneer.versionfile_source"
    assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"

    versionfile_abs = os.path.join(root, cfg.versionfile_source)

    # extract version from first of: _version.py, VCS command (e.g. 'git
    # describe'), parentdir. This is meant to work for developers using a
    # source checkout, for users of a tarball created by 'setup.py sdist',
    # and for users of a tarball/zipball created by 'git archive' or github's
    # download-from-tag feature or the equivalent in other VCSes.
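    # Whichever source succeeds below, the caller gets back a dict shaped
    # like the "unable to compute version" fallback at the end of this
    # function.  A hedged, illustrative example (values are invented; the
    # version string follows the pep440 format documented in render_pep440()
    # above, and the date matches the ISO-8601 form produced earlier):
    #   {"version": "1.2.3+4.gabcdef0",
    #    "full-revisionid": "<full 40-character commit id>",
    #    "dirty": False, "error": None,
    #    "date": "2018-06-01T12:00:00-0700"}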
    get_keywords_f = handlers.get("get_keywords")
    from_keywords_f = handlers.get("keywords")
    if get_keywords_f and from_keywords_f:
        try:
            keywords = get_keywords_f(versionfile_abs)
            ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
            if verbose:
                print("got version from expanded keyword %s" % ver)
            return ver
        except NotThisMethod:
            pass

    try:
        ver = versions_from_file(versionfile_abs)
        if verbose:
            print("got version from file %s %s" % (versionfile_abs, ver))
        return ver
    except NotThisMethod:
        pass

    from_vcs_f = handlers.get("pieces_from_vcs")
    if from_vcs_f:
        try:
            pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
            ver = render(pieces, cfg.style)
            if verbose:
                print("got version from VCS %s" % ver)
            return ver
        except NotThisMethod:
            pass

    try:
        if cfg.parentdir_prefix:
            ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
            if verbose:
                print("got version from parentdir %s" % ver)
            return ver
    except NotThisMethod:
        pass

    if verbose:
        print("unable to compute version")

    return {"version": "0+unknown", "full-revisionid": None,
            "dirty": None, "error": "unable to compute version",
            "date": None}


def get_version():
    """Get the short version string for this project."""
    return get_versions()["version"]


def get_cmdclass():
    """Get the custom setuptools/distutils subclasses used by Versioneer."""
    if "versioneer" in sys.modules:
        del sys.modules["versioneer"]
        # this fixes the "python setup.py develop" case (also 'install' and
        # 'easy_install .'), in which subdependencies of the main project are
        # built (using setup.py bdist_egg) in the same python process. Assume
        # a main project A and a dependency B, which use different versions
        # of Versioneer. A's setup.py imports A's Versioneer, leaving it in
        # sys.modules by the time B's setup.py is executed, causing B to run
        # with the wrong versioneer. Setuptools wraps the sub-dep builds in a
        # sandbox that restores sys.modules to its pre-build state, so the
        # parent is protected against the child's "import versioneer". By
        # removing ourselves from sys.modules here, before the child build
        # happens, we protect the child from the parent's versioneer too.
        # Also see https://github.com/warner/python-versioneer/issues/52

    cmds = {}

    # we add "version" to both distutils and setuptools
    from distutils.core import Command

    class cmd_version(Command):
        description = "report generated version string"
        user_options = []
        boolean_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            vers = get_versions(verbose=True)
            print("Version: %s" % vers["version"])
            print(" full-revisionid: %s" % vers.get("full-revisionid"))
            print(" dirty: %s" % vers.get("dirty"))
            print(" date: %s" % vers.get("date"))
            if vers["error"]:
                print(" error: %s" % vers["error"])
    cmds["version"] = cmd_version

    # we override "build_py" in both distutils and setuptools
    #
    # most invocation pathways end up running build_py:
    #  distutils/build -> build_py
    #  distutils/install -> distutils/build ->..
    #  setuptools/bdist_wheel -> distutils/install ->..
    #  setuptools/bdist_egg -> distutils/install_lib -> build_py
    #  setuptools/install -> bdist_egg ->..
    #  setuptools/develop -> ?
    #  pip install:
    #   copies source tree to a tempdir before running egg_info/etc
    #   if .git isn't copied too, 'git describe' will fail
    #   then does setup.py bdist_wheel, or sometimes setup.py install
    #  setup.py egg_info -> ?
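    # For orientation: the command classes collected in the `cmds` dict below
    # only take effect once setup.py hands them to setuptools/distutils.  A
    # minimal wiring sketch, mirroring the snippet shown in CONFIG_ERROR
    # further down (this project's real setup.py may differ):
    #   import versioneer
    #   setup(version=versioneer.get_version(),
    #         cmdclass=versioneer.get_cmdclass(), ...)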
# we override different "build_py" commands for both environments if "setuptools" in sys.modules: from setuptools.command.build_py import build_py as _build_py else: from distutils.command.build_py import build_py as _build_py class cmd_build_py(_build_py): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() _build_py.run(self) # now locate _version.py in the new build/ directory and replace # it with an updated value if cfg.versionfile_build: target_versionfile = os.path.join(self.build_lib, cfg.versionfile_build) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) cmds["build_py"] = cmd_build_py if "cx_Freeze" in sys.modules: # cx_freeze enabled? from cx_Freeze.dist import build_exe as _build_exe # nczeczulin reports that py2exe won't like the pep440-style string # as FILEVERSION, but it can be used for PRODUCTVERSION, e.g. # setup(console=[{ # "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION # "product_version": versioneer.get_version(), # ... class cmd_build_exe(_build_exe): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() target_versionfile = cfg.versionfile_source print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) _build_exe.run(self) os.unlink(target_versionfile) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) cmds["build_exe"] = cmd_build_exe del cmds["build_py"] if 'py2exe' in sys.modules: # py2exe enabled? try: from py2exe.distutils_buildexe import py2exe as _py2exe # py3 except ImportError: from py2exe.build_exe import py2exe as _py2exe # py2 class cmd_py2exe(_py2exe): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() target_versionfile = cfg.versionfile_source print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) _py2exe.run(self) os.unlink(target_versionfile) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) cmds["py2exe"] = cmd_py2exe # we override different "sdist" commands for both environments if "setuptools" in sys.modules: from setuptools.command.sdist import sdist as _sdist else: from distutils.command.sdist import sdist as _sdist class cmd_sdist(_sdist): def run(self): versions = get_versions() self._versioneer_generated_versions = versions # unless we update this, the command will keep using the old # version self.distribution.metadata.version = versions["version"] return _sdist.run(self) def make_release_tree(self, base_dir, files): root = get_root() cfg = get_config_from_root(root) _sdist.make_release_tree(self, base_dir, files) # now locate _version.py in the new base_dir directory # (remembering that it may be a hardlink) and replace it with an # updated value target_versionfile = os.path.join(base_dir, cfg.versionfile_source) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, self._versioneer_generated_versions) cmds["sdist"] = cmd_sdist return cmds CONFIG_ERROR = """ setup.cfg is missing the necessary Versioneer configuration. 
You need a section like: [versioneer] VCS = git style = pep440 versionfile_source = src/myproject/_version.py versionfile_build = myproject/_version.py tag_prefix = parentdir_prefix = myproject- You will also need to edit your setup.py to use the results: import versioneer setup(version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), ...) Please read the docstring in ./versioneer.py for configuration instructions, edit setup.cfg, and re-run the installer or 'python versioneer.py setup'. """ SAMPLE_CONFIG = """ # See the docstring in versioneer.py for instructions. Note that you must # re-run 'versioneer.py setup' after changing this section, and commit the # resulting files. [versioneer] #VCS = git #style = pep440 #versionfile_source = #versionfile_build = #tag_prefix = #parentdir_prefix = """ INIT_PY_SNIPPET = """ from ._version import get_versions __version__ = get_versions()['version'] del get_versions """ def do_setup(): """Main VCS-independent setup function for installing Versioneer.""" root = get_root() try: cfg = get_config_from_root(root) except (EnvironmentError, configparser.NoSectionError, configparser.NoOptionError) as e: if isinstance(e, (EnvironmentError, configparser.NoSectionError)): print("Adding sample versioneer config to setup.cfg", file=sys.stderr) with open(os.path.join(root, "setup.cfg"), "a") as f: f.write(SAMPLE_CONFIG) print(CONFIG_ERROR, file=sys.stderr) return 1 print(" creating %s" % cfg.versionfile_source) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) ipy = os.path.join(os.path.dirname(cfg.versionfile_source), "__init__.py") if os.path.exists(ipy): try: with open(ipy, "r") as f: old = f.read() except EnvironmentError: old = "" if INIT_PY_SNIPPET not in old: print(" appending to %s" % ipy) with open(ipy, "a") as f: f.write(INIT_PY_SNIPPET) else: print(" %s unmodified" % ipy) else: print(" %s doesn't exist, ok" % ipy) ipy = None # Make sure both the top-level "versioneer.py" and versionfile_source # (PKG/_version.py, used by runtime code) are in MANIFEST.in, so # they'll be copied into source distributions. Pip won't be able to # install the package without this. manifest_in = os.path.join(root, "MANIFEST.in") simple_includes = set() try: with open(manifest_in, "r") as f: for line in f: if line.startswith("include "): for include in line.split()[1:]: simple_includes.add(include) except EnvironmentError: pass # That doesn't cover everything MANIFEST.in can do # (http://docs.python.org/2/distutils/sourcedist.html#commands), so # it might give some false negatives. Appending redundant 'include' # lines is safe, though. if "versioneer.py" not in simple_includes: print(" appending 'versioneer.py' to MANIFEST.in") with open(manifest_in, "a") as f: f.write("include versioneer.py\n") else: print(" 'versioneer.py' already in MANIFEST.in") if cfg.versionfile_source not in simple_includes: print(" appending versionfile_source ('%s') to MANIFEST.in" % cfg.versionfile_source) with open(manifest_in, "a") as f: f.write("include %s\n" % cfg.versionfile_source) else: print(" versionfile_source already in MANIFEST.in") # Make VCS-specific changes. For git, this means creating/changing # .gitattributes to mark _version.py for export-subst keyword # substitution. 
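    # For example, with the sample value versionfile_source =
    # "src/myproject/_version.py" from CONFIG_ERROR above, do_vcs_install()
    # (defined earlier in this file) would append the line
    # "src/myproject/_version.py export-subst" to .gitattributes and stage
    # the touched files with 'git add'.  Illustrative only; the real path
    # comes from the [versioneer] section of setup.cfg.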
do_vcs_install(manifest_in, cfg.versionfile_source, ipy) return 0 def scan_setup_py(): """Validate the contents of setup.py against Versioneer's expectations.""" found = set() setters = False errors = 0 with open("setup.py", "r") as f: for line in f.readlines(): if "import versioneer" in line: found.add("import") if "versioneer.get_cmdclass()" in line: found.add("cmdclass") if "versioneer.get_version()" in line: found.add("get_version") if "versioneer.VCS" in line: setters = True if "versioneer.versionfile_source" in line: setters = True if len(found) != 3: print("") print("Your setup.py appears to be missing some important items") print("(but I might be wrong). Please make sure it has something") print("roughly like the following:") print("") print(" import versioneer") print(" setup( version=versioneer.get_version(),") print(" cmdclass=versioneer.get_cmdclass(), ...)") print("") errors += 1 if setters: print("You should remove lines like 'versioneer.VCS = ' and") print("'versioneer.versionfile_source = ' . This configuration") print("now lives in setup.cfg, and should be removed from setup.py") print("") errors += 1 return errors if __name__ == "__main__": cmd = sys.argv[1] if cmd == "setup": errors = do_setup() errors += scan_setup_py() if errors: sys.exit(1)
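# Usage sketch for the command-line entry point above (assumes this file
# sits at the project root, next to setup.cfg and setup.py):
#
#   $ python versioneer.py setup
#
# This runs do_setup() followed by scan_setup_py(); a non-zero exit status
# means setup.cfg or setup.py still needs the configuration described in
# CONFIG_ERROR above.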