debian/0000755000000000000000000000000012260123564007167 5ustar
debian/qiime-doc.doc-base0000644000000000000000000000410712214301047012430 0ustar
Document: qiime
Title: QIIME: Quantitative Insights Into Microbial Ecology
Author: Greg Caporaso
Abstract: Quantitative Insights Into Microbial Ecology
 QIIME (canonically pronounced ‘Chime’) is a pipeline for performing microbial
 community analysis that integrates many third-party tools which have become
 standard in the field.
 .
 Rather than reimplementing commonly used algorithms, QIIME wraps popular
 implementations of those algorithms. This allows us to make use of the many
 excellent tools available in this area, and allows faster integration of new
 tools. If you use tools that you think would be useful additions to QIIME,
 consider submitting a feature request.
 .
 A standard QIIME analysis begins with sequence data from one or more
 sequencing platforms, including Sanger, Roche/454, and Illumina GAIIx. QIIME
 can perform library de-multiplexing and quality filtering; denoising with
 AmpliconNoise or the QIIME Denoiser; OTU and representative set picking with
 uclust, cdhit, mothur, BLAST, or other tools; taxonomy assignment with BLAST
 or the RDP classifier; sequence alignment with PyNAST, muscle, infernal, or
 other tools; phylogeny reconstruction with FastTree, raxml, clearcut, or
 other tools; alpha diversity and rarefaction, including visualization of
 results, using over 20 metrics including Phylogenetic Diversity, chao1, and
 observed species; beta diversity and rarefaction, including visualization of
 results, using over 25 metrics including weighted and unweighted UniFrac,
 Euclidean distance, and Bray-Curtis; summarization and visualization of
 taxonomic composition of samples using area, bar and pie charts along with
 distance histograms; and many other features. While QIIME is primarily used
 for analysis of amplicon data, many of the downstream analysis steps (such
 as alpha rarefaction and jackknifed beta diversity) can be performed on any
 type of sample x observation table, provided it is formatted correctly.
Section: Science/Biology
Format: html
Files: /usr/share/doc/qiime/html/*
Index: /usr/share/doc/qiime/html/index.html
debian/qiime.links0000644000000000000000000000120612256747443011341 0ustar
usr/bin/qiime usr/bin/denoiser
usr/share/java/king.jar usr/lib/qiime/support_files/jar/king.jar
# Needed because QIIME wants to call the BIOM format tools this way
usr/bin/add_metadata usr/lib/qiime/bin/add_metadata.py
# usr/bin/test_biom_validator usr/lib/qiime/bin/test_biom_validator.py
usr/bin/biom_validator usr/lib/qiime/bin/biom_validator.py
usr/bin/subset_biom usr/lib/qiime/bin/subset_biom.py
usr/bin/convert_biom usr/lib/qiime/bin/convert_biom.py
usr/bin/print_biom_python_config usr/lib/qiime/bin/print_biom_python_config.py
usr/bin/print_biom_table_summary usr/lib/qiime/bin/print_biom_table_summary.py
debian/source/0000755000000000000000000000000012214301047010460 5ustar
debian/source/format0000644000000000000000000000001412214301047011666 0ustar
3.0 (quilt)
debian/README.source0000644000000000000000000000274512214301047011347 0ustar
QIIME for Debian - source
=========================

.py endings and the qiime wrapper of Bio-Linux
----------------------------------------------

Previous comment by Steffen:

Lintian complains a lot about script-with-language-extension, i.e. the .py
endings for files ending up in /usr/bin. What to do about this is not clear
for the moment.
For the time being it seems that lintian is wrong here: Python is too deeply
embedded in the project, and Debian does not want to become incompatible with
upstream.

New comments by Tim:

I borrowed the Bio-Linux approach for Qiime, which may or may not be the best
idea. Essentially, the Python scripts are not in the path and instead of
running:

  % my_qiime_app.py

you run:

  % qiime my_qiime_app.py

or equivalently just:

  % qiime my_qiime_app

The 'qiime' wrapper script adds the extension if needed and sets the path. If
run with no arguments it sets the path and drops to an interactive shell.

The dependencies of Qiime essentially make it non-free despite the DFSG
licence on the Qiime code itself. Chief among these is UClust. This package
is supposed to handle the lack of UClust gracefully, but it is still up to
the user to fetch and install it.

Other dependencies/TODO:
------------------------

Everything else needed should be packaged, if not in Debian proper then in
the SVN.

denoiser - no longer a dependency as it has been folded into Qiime itself.

Several recommended deps should probably be hard depends, as long as they
are available in the archive.

debian/copyright0000644000000000000000000000310012214301047011105 0ustar
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: QIIME
Upstream-Contact: Greg Caporaso
Source: http://sourceforge.net/projects/qiime/files/releases/
Files-Excluded: *.jar

Files: *
Copyright: 2010-2012 Greg Caporaso, Jens Reeder, Kyle Bittinger and others
License: GPL-2+
 This program is free software; you can redistribute it and/or modify it
 under the terms of the GNU General Public License as published by the Free
 Software Foundation; either version 2 of the License, or (at your option)
 any later version.
 .
 This program is distributed in the hope that it will be useful, but WITHOUT
 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 more details.
 .
 You should have received a copy of the GNU General Public License along
 with this program; if not, write to the Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 .
 For other licensing arrangements, please contact Daniel J. Schaid.
 .
 On a Debian system, the GNU GPL license version 2 is distributed in the
 file /usr/share/common-licenses/GPL-2.

Files: debian/*
Copyright: 2010 Sri Girish Srinivasa Murthy, Steffen Moeller
           2011-2012 Tim Booth, Andreas Tille
License: GPL-2+
 Same as the software itself.

debian/scripts/0000755000000000000000000000000012214301047010647 5ustar
debian/scripts/qiime0000644000000000000000000000267712214301047011702 0ustar
#!/bin/sh

# Set up a Qiime working environment. This should either run the Qiime
# command directly or else drop the user to a shell where Qiime is ready
# to run.
# For now, if $SHELL is zsh use that, otherwise run bash.

if [ "$*" = "" ] ; then
    #Run a shell
    if [ `basename "$SHELL"` = "zsh" ] ; then
        export ZOLDDOTDIR=${ZSHDOTDIR:-~}
        export ZDOTDIR=/usr/lib/qiime/shell
        QIIMESHELL="$SHELL"
        QSNAME="ZSH ($SHELL)"
    elif [ `basename "$SHELL"` = "bash" ] ; then
        #Set bash dot directory.
        QIIMESHELL="$SHELL --rcfile /usr/lib/qiime/shell/.bashrc"
        QSNAME="BASH ($SHELL)"
    else
        if which bash > /dev/null ; then
            QIIMESHELL="bash --rcfile /usr/lib/qiime/shell/.bashrc"
            QSNAME="BASH"
        else
            #This should never happen due to package dependencies.
            echo "To start an interactive Qiime shell, you need to have 'bash' available and"
            echo "in your path."
            exit 1
        fi
    fi

    #Determine Qiime version:
    QIIME_VERSION=`/usr/lib/qiime/bin/print_qiime_config.py --version 2>/dev/null | awk '{print $NF}'`
    QIIME_VERSION=${QIIME_VERSION:-UNKNOWN}

    echo """
Setting up $QSNAME environment to run QIIME commands.
You have QIIME version $QIIME_VERSION.
Type 'help' for more info, 'exit' to return to regular shell.
"""
    eval exec $QIIMESHELL
else
    #Just run the command
    # Set environment
    . /usr/lib/qiime/shell/qiime_environment
    cmd=`basename "$1" .py`.py
    shift
    exec "/usr/lib/qiime/bin/$cmd" "$@"
fi

debian/scripts/shell/0000755000000000000000000000000012214301047011756 5ustar
debian/scripts/shell/.zshrc0000644000000000000000000000040012214301047013102 0ustar
# Startup script for ZSH with Qiime environment.

if [ -e "$ZOLDDOTDIR/.zshrc" ] ; then
    source "$ZOLDDOTDIR/.zshrc"
    unset ZOLDDOTDIR
fi

source /usr/lib/qiime/shell/qiime_environment

help() {
    cat "$QIIME_HELP_TEXT"
}

PROMPT="$PROMPT qiime > "

debian/scripts/shell/qiime_environment0000644000000000000000000000244512214301047015436 0ustar
#In the /usr/lib/qiime/shell directory are .zshrc and .bashrc files to set up
#the environment needed for running Qiime. There may be a less hacky way of
#doing this but it works for now.

# Qiime basic environment. Note that sourcing this file multiple times is not
# idempotent and should be avoided!
# This file needs to be readable by /bin/sh and zsh so no bash-isms allowed.

export PYTHONPATH=$PYTHONPATH:/usr/share/pyshared/qiime
export PATH="$PATH:/usr/lib/qiime/bin:/usr/share/ampliconnoise/Scripts:/usr/lib/ChimeraSlayer"
export QIIME_HELP_TEXT=/usr/lib/qiime/shell/qiime_help
export QIIME_CONFIG_FP=${QIIME_CONFIG_FP:-/etc/qiime/qiime_config}
export BLASTMAT=${BLASTMAT:-/usr/share/ncbi/data}

# This was previously set in bio-linux-rdp-classifier but can now live here
# as the rdp-classifier package has been normalised.
export RDP_JAR_PATH=${RDP_JAR_PATH:-/usr/share/rdp-classifier/rdp_classifier.jar}

#RDP_classifier is very bad at admitting what version it is, so ask DPKG:
export RDP_JAR_VERSION_OK=`dpkg-query -f '${Version}\n' -W rdp-classifier | sed 's/[~+-].*//'`

# Mike Cox reports these are wanted for Ampliconnoise
export PYRO_LOOKUP_FILE=${PYRO_LOOKUP_FILE:-/usr/share/ampliconnoise/Data/LookUp_E123.dat}
export SEQ_LOOKUP_FILE=${SEQ_LOOKUP_FILE:-/usr/share/ampliconnoise/Data/Tran.dat}

debian/scripts/shell/qiime_help0000644000000000000000000000325212214301047014017 0ustar
This is a proto-manpage for Qiime on Debian/Bio-Linux. Some of this info
could alternatively go in README.Debian.

INVOCATION:

To run a Qiime command such as check_id_map.py:

  % qiime check_id_map [...]

or just type 'qiime' to get a shell environment where all Qiime commands are
available, i.e.:

  % qiime
  % qiime > check_id_map.py [...]

PARAMETERS FILE:

For certain qiime commands, you need to indicate where your qiime parameters
file is. You should make a copy of the default version at
/usr/share/doc/qiime/qiime_parameters.txt and edit it to suit your needs.
A key thing to edit is the location of your greengenes database and lanemask
files.

You either need to have your edited copy of your qiime parameter file in
your working directory, or you need to give the path to the file using the
-p parameter in the relevant qiime commands.

GREENGENES DATA:

Make sure that you have copies of the greengenes core set data file (fasta)
and the greengenes alignment lanemask file installed. These do NOT come with
the bio-linux-qiime package.
You must edit your custom parameter file in your working directory to give
the full path to these files.

QIIME CONFIGURATION:

Qiime reads configuration information from the file specified by
QIIME_CONFIG_FP. Your QIIME_CONFIG_FP is set to /etc/qiime/qiime_config.
It is unlikely you will need to change the settings in this file.

UCLUST:

You may need to install UClust manually to use some Qiime functions. For
more info, try running 'uclust' at the qiime prompt.

For more information about Qiime, please refer to the Qiime documentation at:

  http://qiime.sourceforge.net/

debian/scripts/shell/.bashrc0000644000000000000000000000045212214301047013222 0ustar
# Startup script for BASH with Qiime environment.

# Source the user's default environment.
test -e "/etc/bash.bashrc" && source "/etc/bash.bashrc"
test -e "$HOME/.bashrc" && source "$HOME/.bashrc"

help() {
    cat "$QIIME_HELP_TEXT"
}

source /usr/lib/qiime/shell/qiime_environment

PS1="$PS1 qiime > "

debian/scripts/usearch0000644000000000000000000000164212214301047012227 0ustar
#!/bin/sh
#
# Added by Tim Booth
# QIIME wants very much to use the USEARCH binary, but this is not free
# software. However, it is available right now free-of-charge if you only
# want the 32-bit version.
# Redistribution requires a licence, which I could apply for, but then nobody
# but me can redistribute BL, which is no good!

# See if usearch.real is available anywhere
if which usearch.real >/dev/null ; then
    exec usearch.real "$@"
fi

echo """\r\
USEARCH is not freely redistributable and is thus not included in the default
QIIME package. You may obtain a personal copy of the 32-bit program at no
charge. To use this feature, please go to:

  http://www.drive5.com/usearch/download.html

Download USEARCH v5.2.236, then:

  sudo mv usearch* /usr/local/bin/usearch
  sudo chmod a+x /usr/local/bin/usearch

You probably also want to install USEARCH 6.1 as /usr/local/bin/usearch61
"""
exit 1

debian/scripts/usearch610000644000000000000000000000164412214301047012400 0ustar
#!/bin/sh
#
# Added by Tim Booth
# QIIME wants very much to use the USEARCH binary, but this is not free
# software. However, it is available right now free-of-charge if you only
# want the 32-bit version.
# Redistribution requires a licence, which I could apply for, but then nobody
# but me can redistribute BL, which is no good!

# See if usearch61.real is available anywhere
if which usearch61.real >/dev/null ; then
    exec usearch61.real "$@"
fi

echo """\r\
USEARCH 6 is not freely redistributable and is thus not included in the
default QIIME package. You may obtain a personal copy of the 32-bit program
at no charge. To use this feature, please go to:

  http://www.drive5.com/usearch/download.html

Download USEARCH v6.1, then:

  sudo mv usearch* /usr/local/bin/usearch61
  sudo chmod a+x /usr/local/bin/usearch61

You probably also want to install USEARCH 5 as /usr/local/bin/usearch
"""
exit 1

debian/scripts/qiime_config0000644000000000000000000000076212214301047013230 0ustar
# qiime_config
# WARNING: DO NOT EDIT OR DELETE Qiime/qiime_config
# To overwrite defaults, copy this file to $HOME/.qiime_config or a full path
# specified by $QIIME_CONFIG_FP and edit that copy of the file.
cluster_jobs_fp
python_exe_fp	python
working_dir	.
blastmat_dir	/usr/share/ncbi/data
blastall_fp	blastall
pynast_template_alignment_fp
pynast_template_alignment_blastdb
template_alignment_lanemask_fp
jobs_to_start	1
seconds_to_sleep	60
qiime_scripts_dir	/usr/lib/qiime/bin/
temp_dir	/tmp

debian/scripts/uclust0000644000000000000000000000173412214301047012116 0ustar
#!/bin/sh
#
# Added by Tim Booth
# QIIME wants very much to use the UClust binary, but this is not free
# software. However, it is available right now free-of-charge if:
#  a) You are an academic user and only want the 32-bit version.
#  b) You promise to use the UClust application only as part of QIIME.
#
# My plan is to put the no-cost UClust into bio-linux-qiime. If someone
# installs just the regular QIIME package they need to know what to do...

# See if uclust.real is available.
if which uclust.real >/dev/null ; then
    exec uclust.real "$@"
fi

if [ `uname -m` = x86_64 ] ; then
    bits='64-bit'
else
    bits='32-bit'
fi

echo """\r\
UClust is not freely redistributable and is thus not included in the default
QIIME package. To use this feature, please go to:

  http://www.drive5.com/uclust/downloads1_2_22q.html

Download the $bits binary, then:

  sudo cp uclustq1.2.22_* /usr/local/bin/uclust
  sudo chmod a+x /usr/local/bin/uclust
"""
exit 1

debian/qiime.README.Debian0000644000000000000000000000427012214301047012327 0ustar
Quick Start
===========

Type 'qiime' at the shell prompt before running any of the standard qiime
commands (see the example session at the end of this file).

QIIME for Debian and Ubuntu
===========================

QIIME is a very powerful environment, but the core of its power stems from
the tools underneath. Not all the tools that QIIME can work with are
available in Debian yet. If you can find resources to help - please do.
Most of the needed tools are available via the Bio-Linux project:
http://nebc.nerc.ac.uk/tools/bio-linux . This package was one of the first
to profit from the now joined forces on getting the packages into the best
possible shape for everyone.

The package is in experimental since:

 * it depends on python-pynast, which is in experimental - and, like pynast,
   it uses the sphinx documentation system, which is not yet used to
   streamline the orig.tar.gz and recreate the documentation from its
   sources. When adding python-sphinx the build fails because of a conflict
   with the orig.tar.gz that could not yet be resolved.

 * qiime still lacks a man page

The distribution is set to contrib because of:

 * the strong recommendation to use uclust, which is not shipped with Debian
   as it is a binary-only package.

Disambiguation of versions in bio-linux, Debian and Ubuntu
----------------------------------------------------------

The conflict reported in the control file was originally indicated against
bio-linux-qiime (<= bl1.1.x). The prefix "bl" is unfortunate since the Debian
packaging tools (and the Debian policy) demand that the version starts with a
number, not a letter. The version is now removed from that indicated
conflict.

Depending on who performed the last edit of the package, versions are either
specified with a suffix "-N", N numeric, for uploads to Debian, or with the
suffix "-ubuntuN". The changelog shows the Ns increasing and being reset for
new upstream versions. This is slightly different from the common practice
of having the Ubuntu version keep the last Debian version (or start a new
one as 0) and extending the version to "NubuntuM", with M reset to 1 for
every new Debian version. Either way makes sense. The current approach
indicates more clearly that an upload to Debian may not have happened yet.
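Example session
===============

The transcript below is purely illustrative: the file names are made up and
the check_id_map options are only examples (run 'qiime check_id_map --help'
for the real ones). The banner text comes from the qiime wrapper script.

  $ qiime
  Setting up BASH (/bin/bash) environment to run QIIME commands.
  You have QIIME version 1.8.0.
  Type 'help' for more info, 'exit' to return to regular shell.
  qiime > check_id_map.py -m my_mapping_file.txt -o mapping_check/   # made-up file names
  qiime > exit

or run a single command without entering the interactive shell:

  $ qiime check_id_map -m my_mapping_file.txt -o mapping_check/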
debian/upstream0000644000000000000000000000153512214301047010747 0ustar
Name: QIIME
Homepage: http://www.qiime.org/
Reference:
 Author: J Gregory Caporaso and Justin Kuczynski and Jesse Stombaugh and
  Kyle Bittinger and Frederic D Bushman and Elizabeth K Costello and
  Noah Fierer and Antonio Gonzalez Pena and Julia K Goodrich and
  Jeffrey I Gordon and Gavin A Huttley and Scott T Kelley and Dan Knights and
  Jeremy E Koenig and Ruth E Ley and Catherine A Lozupone and
  Daniel McDonald and Brian D Muegge and Meg Pirrung and Jens Reeder and
  Joel R Sevinsky and Peter J Turnbaugh and William A Walters and
  Jeremy Widmann and Tanya Yatsunenko and Jesse Zaneveld and Rob Knight
 Title: QIIME allows analysis of high-throughput community sequencing data
 Journal: Nature Methods
 Volume: 7
 Pages: 335-336
 Year: 2010
 DOI: 10.1038/nmeth.f.303
 PMID: 20383131
 URL: http://www.nature.com/nmeth/journal/v7/n5/full/nmeth.f.303.html

debian/rules0000755000000000000000000000512312256757432010254 0ustar
#!/usr/bin/make -f
# -*- makefile -*-
# debian/rules for Qiime
# Tim Booth, Andreas Tille
# GPL

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

pkg := $(shell dpkg-parsechangelog | sed -n 's/^Source: //p')

#Allowing this to be overridden by environment helps with backports
DEB_PYTHON_SUPPORT?=python2

%:
	dh $@ --with $(DEB_PYTHON_SUPPORT)

override_dh_compress:
	dh_compress \
	  --exclude=.js \
	  --exclude=.sff \
	  --exclude=.qual \
	  --exclude=.fna \
	  --exclude=.fna.txt

override_dh_auto_clean:
	#Calling "python setup.py clean -a" actually triggers a
	#rebuild, so don't do it.
	rm -rf build
	rm -rf doc/_build
	( cd $(pkg)/support_files/denoiser/FlowgramAlignment && make clean ) || true
	rm -f $(pkg)/support_files/denoiser/bin/*
	#In case the tarball was not re-packed, remove any .jars
	find -name '*.jar' -delete
	#And in case any Python script was run, remove the .pyc
	find qiime scripts -name '*.pyc' -delete
	# Remove leftovers from running the test suite
	rm -f BLAST_temp_db*
	rm -rf jobs

export ROOTDIR=debian/$(pkg)

#Lots of shuffling to be done...
# Python scripts go into /usr/lib/$(pkg)/bin (not /usr/bin)
# Helper script goes into /usr/bin
# Setup scripts go into /usr/lib/$(pkg)/shell
# Default configuration goes into /etc/$(pkg)/default_qiime_config
# ...see debian/*.install files
override_dh_install:
	dh_install
	chmod a+x $(ROOTDIR)/usr/lib/qiime/bin/uclust || true
	chmod a+x $(ROOTDIR)/usr/lib/qiime/bin/usearch* || true
	chmod -R a+rX $(ROOTDIR)/usr/lib/qiime/shell || true
	chmod a-x $(ROOTDIR)/usr/share/qiime/support_files/R/* || true
	chmod a-x $(ROOTDIR)/usr/share/qiime/support_files/js/* || true
	#Fix lintian warning for jquery
	rm $(ROOTDIR)/usr/share/qiime/support_files/js/jquery.js && \
	  ln -s /usr/share/javascript/jquery/jquery.js $(ROOTDIR)/usr/share/qiime/support_files/js
	#Make some symlinks as Qiime assumes a single dir structure
	for path in denoiser/Data css images R js ; do \
	  dh_link -pqiime /usr/share/qiime/support_files/"$$path" /usr/lib/qiime/support_files/"$$path" ;\
	done

override_dh_installchangelogs:
	dh_installchangelogs ChangeLog.md

override_dh_builddeb:
	dh_builddeb -- -Z xz

# Remark: The following uscan command requires devscripts > 2.12.4 which is not
# yet released at the time of this package release.
The code can be obtained # via # git clone git://tille@git.debian.org/git/users/tille/devscripts.git # and then use scripts/uscan.pl get-orig-source: mkdir -p ../tarballs uscan --verbose --force-download --destdir=../tarballs --repack-compression xz debian/get-orig-source0000755000000000000000000000161712214301047012126 0ustar #!/bin/sh # get source for king and strip binary JARs COMPRESSION=xz set -e NAME=`dpkg-parsechangelog | awk '/^Source/ { print $2 }'` if ! echo $@ | grep -q upstream-version ; then VERSION=`dpkg-parsechangelog | awk '/^Version:/ { print $2 }' | sed 's/\([0-9\.]\+\)-[0-9]\+$/\1/'` else VERSION=`echo $@ | sed "s?^.*--upstream-version \([0-9.]\+\) .*${name}.*?\1?"` if echo "$VERSION" | grep -q "upstream-version" ; then echo "Unable to parse version number" exit fi fi # Upstream tarball has upper case 'Q' UPSTREAMNAME=`echo ${NAME} | tr [q] [Q]` TARDIR=${UPSTREAMNAME}-${VERSION} mkdir -p ../tarballs cd ../tarballs tar xaf ../${UPSTREAMNAME}-${VERSION}.tar.gz # Remove useless JAR and CLASS files find . -name "*.jar" -delete GZIP="--best --no-name" tar --owner=root --group=root --mode=a+rX -caf "$NAME"_"$VERSION".orig.tar.${COMPRESSION} "${TARDIR}" rm -rf "$TARDIR" debian/qiime.dirs0000644000000000000000000000002212214301047011141 0ustar usr/lib/qiime/bin debian/patches/0000755000000000000000000000000012257135266010626 5ustar debian/patches/prevent_google_addsense.patch0000644000000000000000000000226512255267775016552 0ustar Author: Andreas Tille Last-Update: Sat, 21 Dec 2013 08:50:20 +0100 Description: Remove Google Addsense from user documentation to save user privacy --- qiime-1.8.0+dfsg.orig.orig/doc/_templates/layout.html +++ qiime-1.8.0+dfsg.orig/doc/_templates/layout.html @@ -2,8 +2,6 @@ {% block extrahead %} - - {% endblock %} {% block relbar1 %} @@ -43,12 +41,8 @@
{{ super() }} - -{% endblock %} \ No newline at end of file +{% endblock %} debian/patches/series0000644000000000000000000000031512256762071012041 0ustar check_config_file_in_new_location.patch fix_binary_helper_location.patch make_qiime_accept_new_rdp_classifier fix_path_for_support_files prevent_google_addsense.patch exclude_tests_that_need_to_fail.patch debian/patches/fix_path_for_support_files0000644000000000000000000000114112255246611016167 0ustar This may be a much simpler fix than patching every single mention of support_files, but I'm not sure what else this function is used to find? --- a/qiime/util.py +++ b/qiime/util.py @@ -302,8 +302,9 @@ def get_qiime_project_dir(): # Get the directory containing util.py current_dir_path = dirname(current_file_path) # Return the directory containing the directory containing util.py - return dirname(current_dir_path) - + # In Debian what we actually want is /usr/lib/[qiime] + return "/usr/lib" + def get_qiime_scripts_dir(): """ Returns the QIIME scripts directory debian/patches/check_config_file_in_new_location.patch0000644000000000000000000000134112255246623020474 0ustar We've moved the default config file, so tell this script to look in the new location. See the next patch for other path fixes related to support_files --- a/scripts/print_qiime_config.py +++ b/scripts/print_qiime_config.py @@ -210,8 +210,7 @@ class QIIMEConfig(TestCase): """local qiime_config has no extra params""" qiime_project_dir = get_qiime_project_dir() - orig_config = parse_qiime_config_file(open(qiime_project_dir + - '/qiime/support_files/qiime_config')) + orig_config = parse_qiime_config_file(open('/etc/qiime/qiime_config')) #check the env qiime_config qiime_config_env_filepath = getenv('QIIME_CONFIG_FP') debian/patches/make_qiime_accept_new_rdp_classifier0000644000000000000000000000361412255246616020117 0ustar This patch twists QIIME's arm to accept running a newer version of the RDP classifier by setting RDP_JAR_VERSION_OK, which is done by the QIIME wrapper. This is a nasty hack and hopefully the patch can be dropped for QIIME 1.6 --- a/qiime/assign_taxonomy.py +++ b/qiime/assign_taxonomy.py @@ -65,14 +65,20 @@ def validate_rdp_version(rdp_jarpath=Non "http://qiime.org/install/install.html#rdp-install" ) - rdp_jarname = os.path.basename(rdp_jarpath) - version_match = re.search("\d\.\d", rdp_jarname) - if version_match is None: - raise RuntimeError( - "Unable to detect RDP Classifier version in file %s" % rdp_jarname - ) + #Patch for Bio-Linux/Debian. Allow us to reassure QIIME about the version + #of RDP Classifier using an environment variable. + if os.getenv('RDP_JAR_VERSION_OK') is not None : + version = os.getenv('RDP_JAR_VERSION_OK') + else : + rdp_jarname = os.path.basename(rdp_jarpath) + version_match = re.search("\d\.\d", rdp_jarname) + if version_match is None: + raise RuntimeError( + "Unable to detect RDP Classifier version in file %s" % rdp_jarname + ) + + version = float(version_match.group()) - version = float(version_match.group()) if version < 2.1: raise RuntimeError( "RDP Classifier does not look like version 2.2 or greater." --- a/scripts/assign_taxonomy.py +++ b/scripts/assign_taxonomy.py @@ -301,6 +301,11 @@ def main(): params['training_data_properties_fp'] = opts.training_data_properties_fp params['max_memory'] = "%sM" % opts.rdp_max_memory + #Record actual RDP version. This shouldn't fail as it was called once + #already. 
+ params['real_rdp_version'] = str(validate_rdp_version()) + + elif assignment_method == 'rtax': params['id_to_taxonomy_fp'] = opts.id_to_taxonomy_fp params['reference_sequences_fp'] = opts.reference_seqs_fp debian/patches/exclude_tests_that_need_to_fail.patch0000644000000000000000000012201312257135266020231 0ustar Author: Andreas Tille Last-Update: Thu, 26 Dec 2013 07:39:57 +0100 Description: Exclude tests that need to fail uclust is non-free and can not be packaged. The QIIME package just contains a wrapper telling this fact the user and thus the tests will fail. To avoid useless failures these tests are excluded. --- a/tests/test_align_seqs.py +++ b/tests/test_align_seqs.py @@ -134,20 +134,20 @@ class InfernalAlignerTests(SharedSetupTe LoadSeqs(data=infernal_test1_expected_alignment,aligned=Alignment,\ moltype=DNA) - def test_call_infernal_test1_file_output(self): - """InfernalAligner writes correct output files for infernal_test1 seqs - """ - # do not collect results; check output files instead - actual = self.infernal_test1_aligner(\ - self.infernal_test1_input_fp, result_path=self.result_fp, - log_path=self.log_fp) - - self.assertTrue(actual == None,\ - "Result should be None when result path provided.") - - expected_aln = self.infernal_test1_expected_aln - actual_aln = LoadSeqs(self.result_fp,aligned=Alignment) - self.assertEqual(actual_aln,expected_aln) +# def test_call_infernal_test1_file_output(self): +# """InfernalAligner writes correct output files for infernal_test1 seqs +# """ +# # do not collect results; check output files instead +# actual = self.infernal_test1_aligner(\ +# self.infernal_test1_input_fp, result_path=self.result_fp, +# log_path=self.log_fp) +# +# self.assertTrue(actual == None,\ +# "Result should be None when result path provided.") +# +# expected_aln = self.infernal_test1_expected_aln +# actual_aln = LoadSeqs(self.result_fp,aligned=Alignment) +# self.assertEqual(actual_aln,expected_aln) def test_call_infernal_test1(self): """InfernalAligner: functions as expected when returing objects @@ -221,84 +221,84 @@ class PyNastAlignerTests(SharedSetupTest self.pynast_test1_expected_fail = \ LoadSeqs(data=pynast_test1_expected_failure,aligned=False) - def test_call_pynast_test1_file_output(self): - """PyNastAligner writes correct output files for pynast_test1 seqs - """ - # do not collect results; check output files instead - actual = self.pynast_test1_aligner(\ - self.pynast_test1_input_fp, result_path=self.result_fp, - log_path=self.log_fp, failure_path=self.failure_fp) - - self.assertTrue(actual == None,\ - "Result should be None when result path provided.") - - expected_aln = self.pynast_test1_expected_aln - actual_aln = LoadSeqs(self.result_fp,aligned=DenseAlignment) - self.assertEqual(actual_aln,expected_aln) - - actual_fail = LoadSeqs(self.failure_fp,aligned=False) - self.assertEqual(actual_fail.toFasta(),\ - self.pynast_test1_expected_fail.toFasta()) - - - def test_call_pynast_test1_file_output_alt_params(self): - """PyNastAligner writes correct output files when no seqs align - """ - aligner = PyNastAligner({ - 'template_filepath': self.pynast_test1_template_fp, - 'min_len':1000}) - - actual = aligner(\ - self.pynast_test1_input_fp, result_path=self.result_fp, - log_path=self.log_fp, failure_path=self.failure_fp) - - self.assertTrue(actual == None,\ - "Result should be None when result path provided.") - - self.assertEqual(getsize(self.result_fp),0,\ - "No alignable seqs should result in an empty file.") - - # all seqs reported to fail - actual_fail = 
LoadSeqs(self.failure_fp,aligned=False) - self.assertEqual(actual_fail.getNumSeqs(),3) - - def test_call_pynast_test1(self): - """PyNastAligner: functions as expected when returing objects - """ - actual_aln = self.pynast_test1_aligner(self.pynast_test1_input_fp) - expected_aln = self.pynast_test1_expected_aln - - expected_names = ['1 description field 1..23', '2 1..23'] - self.assertEqual(actual_aln.Names, expected_names) - self.assertEqual(actual_aln, expected_aln) - - def test_call_pynast_template_aln_with_dots(self): - """PyNastAligner: functions when template alignment contains dots - """ - pynast_aligner = PyNastAligner({ - 'template_filepath': self.pynast_test_template_w_dots_fp, - 'min_len': 15, - }) - actual_aln = pynast_aligner(self.pynast_test1_input_fp) - expected_aln = self.pynast_test1_expected_aln - - expected_names = ['1 description field 1..23', '2 1..23'] - self.assertEqual(actual_aln.Names, expected_names) - self.assertEqual(actual_aln, expected_aln) - - def test_call_pynast_template_aln_with_lower(self): - """PyNastAligner: functions when template alignment contains lower case - """ - pynast_aligner = PyNastAligner({ - 'template_filepath': self.pynast_test_template_w_lower_fp, - 'min_len': 15, - }) - actual_aln = pynast_aligner(self.pynast_test1_input_fp) - expected_aln = self.pynast_test1_expected_aln - - expected_names = ['1 description field 1..23', '2 1..23'] - self.assertEqual(actual_aln.Names, expected_names) - self.assertEqual(actual_aln, expected_aln) +# def test_call_pynast_test1_file_output(self): +# """PyNastAligner writes correct output files for pynast_test1 seqs +# """ +# # do not collect results; check output files instead +# actual = self.pynast_test1_aligner(\ +# self.pynast_test1_input_fp, result_path=self.result_fp, +# log_path=self.log_fp, failure_path=self.failure_fp) +# +# self.assertTrue(actual == None,\ +# "Result should be None when result path provided.") +# +# expected_aln = self.pynast_test1_expected_aln +# actual_aln = LoadSeqs(self.result_fp,aligned=DenseAlignment) +# self.assertEqual(actual_aln,expected_aln) +# +# actual_fail = LoadSeqs(self.failure_fp,aligned=False) +# self.assertEqual(actual_fail.toFasta(),\ +# self.pynast_test1_expected_fail.toFasta()) + + +# def test_call_pynast_test1_file_output_alt_params(self): +# """PyNastAligner writes correct output files when no seqs align +# """ +# aligner = PyNastAligner({ +# 'template_filepath': self.pynast_test1_template_fp, +# 'min_len':1000}) +# +# actual = aligner(\ +# self.pynast_test1_input_fp, result_path=self.result_fp, +# log_path=self.log_fp, failure_path=self.failure_fp) +# +# self.assertTrue(actual == None,\ +# "Result should be None when result path provided.") +# +# self.assertEqual(getsize(self.result_fp),0,\ +# "No alignable seqs should result in an empty file.") +# +# # all seqs reported to fail +# actual_fail = LoadSeqs(self.failure_fp,aligned=False) +# self.assertEqual(actual_fail.getNumSeqs(),3) + +# def test_call_pynast_test1(self): +# """PyNastAligner: functions as expected when returing objects +# """ +# actual_aln = self.pynast_test1_aligner(self.pynast_test1_input_fp) +# expected_aln = self.pynast_test1_expected_aln +# +# expected_names = ['1 description field 1..23', '2 1..23'] +# self.assertEqual(actual_aln.Names, expected_names) +# self.assertEqual(actual_aln, expected_aln) + +# def test_call_pynast_template_aln_with_dots(self): +# """PyNastAligner: functions when template alignment contains dots +# """ +# pynast_aligner = PyNastAligner({ +# 'template_filepath': 
self.pynast_test_template_w_dots_fp, +# 'min_len': 15, +# }) +# actual_aln = pynast_aligner(self.pynast_test1_input_fp) +# expected_aln = self.pynast_test1_expected_aln +# +# expected_names = ['1 description field 1..23', '2 1..23'] +# self.assertEqual(actual_aln.Names, expected_names) +# self.assertEqual(actual_aln, expected_aln) + +# def test_call_pynast_template_aln_with_lower(self): +# """PyNastAligner: functions when template alignment contains lower case +# """ +# pynast_aligner = PyNastAligner({ +# 'template_filepath': self.pynast_test_template_w_lower_fp, +# 'min_len': 15, +# }) +# actual_aln = pynast_aligner(self.pynast_test1_input_fp) +# expected_aln = self.pynast_test1_expected_aln +# +# expected_names = ['1 description field 1..23', '2 1..23'] +# self.assertEqual(actual_aln.Names, expected_names) +# self.assertEqual(actual_aln, expected_aln) def test_call_pynast_template_aln_with_U(self): """PyNastAligner: error message when template contains bad char @@ -309,43 +309,43 @@ class PyNastAlignerTests(SharedSetupTest }) self.assertRaises(KeyError,pynast_aligner,self.pynast_test1_input_fp) - def test_call_pynast_alt_pairwise_method(self): - """PyNastAligner: alternate pairwise alignment method produces correct alignment - """ - aligner = PyNastAligner({ - 'pairwise_alignment_method': 'muscle', - 'template_filepath': self.pynast_test1_template_fp, - 'min_len': 15, - }) - actual_aln = aligner(self.pynast_test1_input_fp) - expected_aln = self.pynast_test1_expected_aln - self.assertEqual(actual_aln, expected_aln) - - def test_call_pynast_test1_alt_min_len(self): - """PyNastAligner: returns no result when min_len too high - """ - aligner = PyNastAligner({ - 'template_filepath': self.pynast_test1_template_fp, - 'min_len':1000}) - - actual_aln = aligner(\ - self.pynast_test1_input_fp) - expected_aln = {} - - self.assertEqual(actual_aln, expected_aln) - - def test_call_pynast_test1_alt_min_pct(self): - """PyNastAligner: returns no result when min_pct too high - """ - aligner = PyNastAligner({ - 'template_filepath': self.pynast_test1_template_fp, - 'min_len':15, - 'min_pct':100.0}) - - actual_aln = aligner(self.pynast_test1_input_fp) - expected_aln = {} - - self.assertEqual(actual_aln, expected_aln) +# def test_call_pynast_alt_pairwise_method(self): +# """PyNastAligner: alternate pairwise alignment method produces correct alignment +# """ +# aligner = PyNastAligner({ +# 'pairwise_alignment_method': 'muscle', +# 'template_filepath': self.pynast_test1_template_fp, +# 'min_len': 15, +# }) +# actual_aln = aligner(self.pynast_test1_input_fp) +# expected_aln = self.pynast_test1_expected_aln +# self.assertEqual(actual_aln, expected_aln) + +# def test_call_pynast_test1_alt_min_len(self): +# """PyNastAligner: returns no result when min_len too high +# """ +# aligner = PyNastAligner({ +# 'template_filepath': self.pynast_test1_template_fp, +# 'min_len':1000}) +# +# actual_aln = aligner(\ +# self.pynast_test1_input_fp) +# expected_aln = {} +# +# self.assertEqual(actual_aln, expected_aln) + +# def test_call_pynast_test1_alt_min_pct(self): +# """PyNastAligner: returns no result when min_pct too high +# """ +# aligner = PyNastAligner({ +# 'template_filepath': self.pynast_test1_template_fp, +# 'min_len':15, +# 'min_pct':100.0}) +# +# actual_aln = aligner(self.pynast_test1_input_fp) +# expected_aln = {} +# +# self.assertEqual(actual_aln, expected_aln) def tearDown(self): """ --- a/tests/test_assign_taxonomy.py +++ b/tests/test_assign_taxonomy.py @@ -134,53 +134,53 @@ class UclustConsensusTaxonAssignerTests( 
if exists(d): rmtree(d) - def test_uclust_assigner_write_to_file(self): - """UclustConsensusTaxonAssigner returns without error, writing results - """ - params = {'id_to_taxonomy_fp':self.id_to_tax1_fp, - 'reference_sequences_fp':self.refseqs1_fp} - - t = UclustConsensusTaxonAssigner(params) - result = t(seq_path=self.inseqs1_fp, - result_path=self.output_txt_fp, - uc_path=self.output_uc_fp, - log_path=self.output_log_fp) - del t - # result files exist after the UclustConsensusTaxonAssigner - # no longer exists - self.assertTrue(exists(self.output_txt_fp)) - self.assertTrue(exists(self.output_uc_fp)) - self.assertTrue(exists(self.output_log_fp)) - - # check that result has the expected lines - output_lines = list(open(self.output_txt_fp,'U')) - self.assertTrue('q1\tA;F;G\t1.00\t1\n' in output_lines) - self.assertTrue('q2\tA;H;I;J\t1.00\t1\n' in output_lines) +# def test_uclust_assigner_write_to_file(self): +# """UclustConsensusTaxonAssigner returns without error, writing results +# """ +# params = {'id_to_taxonomy_fp':self.id_to_tax1_fp, +# 'reference_sequences_fp':self.refseqs1_fp} +# +# t = UclustConsensusTaxonAssigner(params) +# result = t(seq_path=self.inseqs1_fp, +# result_path=self.output_txt_fp, +# uc_path=self.output_uc_fp, +# log_path=self.output_log_fp) +# del t +# # result files exist after the UclustConsensusTaxonAssigner +# # no longer exists +# self.assertTrue(exists(self.output_txt_fp)) +# self.assertTrue(exists(self.output_uc_fp)) +# self.assertTrue(exists(self.output_log_fp)) +# +# # check that result has the expected lines +# output_lines = list(open(self.output_txt_fp,'U')) +# self.assertTrue('q1\tA;F;G\t1.00\t1\n' in output_lines) +# self.assertTrue('q2\tA;H;I;J\t1.00\t1\n' in output_lines) - def test_uclust_assigner(self): - """UclustConsensusTaxonAssigner returns without error, returning dict - """ - params = {'id_to_taxonomy_fp':self.id_to_tax1_fp, - 'reference_sequences_fp':self.refseqs1_fp} - - t = UclustConsensusTaxonAssigner(params) - result = t(seq_path=self.inseqs1_fp, - result_path=None, - uc_path=self.output_uc_fp, - log_path=self.output_log_fp) - - self.assertEqual(result['q1'],(['A','F','G'],1.0,1)) - self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1)) - - # no result paths provided - t = UclustConsensusTaxonAssigner(params) - result = t(seq_path=self.inseqs1_fp, - result_path=None, - uc_path=None, - log_path=None) - - self.assertEqual(result['q1'],(['A','F','G'],1.0,1)) - self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1)) +# def test_uclust_assigner(self): +# """UclustConsensusTaxonAssigner returns without error, returning dict +# """ +# params = {'id_to_taxonomy_fp':self.id_to_tax1_fp, +# 'reference_sequences_fp':self.refseqs1_fp} +# +# t = UclustConsensusTaxonAssigner(params) +# result = t(seq_path=self.inseqs1_fp, +# result_path=None, +# uc_path=self.output_uc_fp, +# log_path=self.output_log_fp) +# +# self.assertEqual(result['q1'],(['A','F','G'],1.0,1)) +# self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1)) +# +# # no result paths provided +# t = UclustConsensusTaxonAssigner(params) +# result = t(seq_path=self.inseqs1_fp, +# result_path=None, +# uc_path=None, +# log_path=None) +# +# self.assertEqual(result['q1'],(['A','F','G'],1.0,1)) +# self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1)) def test_get_consensus_assignment(self): """_get_consensus_assignment fuctions as expected """ --- a/tests/test_pick_otus.py +++ b/tests/test_pick_otus.py @@ -2478,40 +2478,40 @@ class UclustOtuPickerTests(TestCase): exp = 
{0:['s1','s4','s6','s2','s3','s5']} self.assertEqual(obs,exp) - def test_abundance_sort(self): - """UclustOtuPicker: abundance sort functions as expected - """ - #enable abundance sorting with suppress sort = False (it gets - # set to True internally, otherwise uclust's length sort would - # override the abundance sorting) - seqs = [('s1 comment1','ACCTTGTTACTTT'), # three copies - ('s2 comment2','ACCTTGTTACTTTC'), # one copy - ('s3 comment3','ACCTTGTTACTTTCC'),# two copies - ('s4 comment4','ACCTTGTTACTTT'), - ('s5 comment5','ACCTTGTTACTTTCC'), - ('s6 comment6','ACCTTGTTACTTT')] - seqs_fp = self.seqs_to_temp_fasta(seqs) - - # abundance sorting changes order - app = UclustOtuPicker(params={'Similarity':0.80, - 'enable_rev_strand_matching':False, - 'suppress_sort':False, - 'presort_by_abundance':True, - 'save_uc_files':False}) - obs = app(seqs_fp) - exp = {0:['s1','s4','s6','s3','s5','s2']} - self.assertEqual(obs,exp) - - # abundance sorting changes order -- same results with suppress_sort = - # True b/c (it gets set to True to when presorting by abundance) - app = UclustOtuPicker(params={'Similarity':0.80, - 'enable_rev_strand_matching':False, - 'suppress_sort':True, - 'presort_by_abundance':True, - 'save_uc_files':False}) - obs = app(seqs_fp) - exp = {0:['s1','s4','s6','s3','s5','s2']} - self.assertEqual(obs,exp) +# def test_abundance_sort(self): +# """UclustOtuPicker: abundance sort functions as expected +# """ +# #enable abundance sorting with suppress sort = False (it gets +# # set to True internally, otherwise uclust's length sort would +# # override the abundance sorting) +# seqs = [('s1 comment1','ACCTTGTTACTTT'), # three copies +# ('s2 comment2','ACCTTGTTACTTTC'), # one copy +# ('s3 comment3','ACCTTGTTACTTTCC'),# two copies +# ('s4 comment4','ACCTTGTTACTTT'), +# ('s5 comment5','ACCTTGTTACTTTCC'), +# ('s6 comment6','ACCTTGTTACTTT')] +# seqs_fp = self.seqs_to_temp_fasta(seqs) +# +# # abundance sorting changes order +# app = UclustOtuPicker(params={'Similarity':0.80, +# 'enable_rev_strand_matching':False, +# 'suppress_sort':False, +# 'presort_by_abundance':True, +# 'save_uc_files':False}) +# obs = app(seqs_fp) +# exp = {0:['s1','s4','s6','s3','s5','s2']} +# self.assertEqual(obs,exp) +# +# # abundance sorting changes order -- same results with suppress_sort = +# # True b/c (it gets set to True to when presorting by abundance) +# app = UclustOtuPicker(params={'Similarity':0.80, +# 'enable_rev_strand_matching':False, +# 'suppress_sort':True, +# 'presort_by_abundance':True, +# 'save_uc_files':False}) +# obs = app(seqs_fp) +# exp = {0:['s1','s4','s6','s3','s5','s2']} +# self.assertEqual(obs,exp) def test_call_default_params(self): """UclustOtuPicker.__call__ returns expected clusters default params""" @@ -2542,35 +2542,35 @@ class UclustOtuPickerTests(TestCase): self.assertEqual(obs_otu_ids, exp_otu_ids) self.assertEqual(obs_clusters, exp_clusters) - def test_call_default_params_suppress_sort(self): - """UclustOtuPicker.__call__ returns expected clusters default params""" - - # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs - - exp_otu_ids = range(10) - exp_clusters = [['uclust_test_seqs_0'], - ['uclust_test_seqs_1'], - ['uclust_test_seqs_2'], - ['uclust_test_seqs_3'], - ['uclust_test_seqs_4'], - ['uclust_test_seqs_5'], - ['uclust_test_seqs_6'], - ['uclust_test_seqs_7'], - ['uclust_test_seqs_8'], - ['uclust_test_seqs_9']] - - app = UclustOtuPicker(params={'save_uc_files':False, - 'suppress_sort':True}) - obs = app(self.tmp_seq_filepath1) - obs_otu_ids = obs.keys() - 
obs_otu_ids.sort() - obs_clusters = obs.values() - obs_clusters.sort() - # The relation between otu ids and clusters is abitrary, and - # is not stable due to use of dicts when parsing clusters -- therefore - # just checks that we have the expected group of each - self.assertEqual(obs_otu_ids, exp_otu_ids) - self.assertEqual(obs_clusters, exp_clusters) +# def test_call_default_params_suppress_sort(self): +# """UclustOtuPicker.__call__ returns expected clusters default params""" +# +# # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs +# +# exp_otu_ids = range(10) +# exp_clusters = [['uclust_test_seqs_0'], +# ['uclust_test_seqs_1'], +# ['uclust_test_seqs_2'], +# ['uclust_test_seqs_3'], +# ['uclust_test_seqs_4'], +# ['uclust_test_seqs_5'], +# ['uclust_test_seqs_6'], +# ['uclust_test_seqs_7'], +# ['uclust_test_seqs_8'], +# ['uclust_test_seqs_9']] +# +# app = UclustOtuPicker(params={'save_uc_files':False, +# 'suppress_sort':True}) +# obs = app(self.tmp_seq_filepath1) +# obs_otu_ids = obs.keys() +# obs_otu_ids.sort() +# obs_clusters = obs.values() +# obs_clusters.sort() +# # The relation between otu ids and clusters is abitrary, and +# # is not stable due to use of dicts when parsing clusters -- therefore +# # just checks that we have the expected group of each +# self.assertEqual(obs_otu_ids, exp_otu_ids) +# self.assertEqual(obs_clusters, exp_clusters) def test_call_default_params_save_uc_file(self): @@ -2627,69 +2627,69 @@ class UclustOtuPickerTests(TestCase): self.assertEqual(obs_otu_ids, exp_otu_ids) self.assertEqual(obs_clusters, exp_clusters) - def test_call_alt_threshold(self): - """UclustOtuPicker.__call__ returns expected clusters with alt threshold - """ - # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs - - exp_otu_ids = range(9) - exp_clusters = [['uclust_test_seqs_0'], - ['uclust_test_seqs_1'], - ['uclust_test_seqs_2'], - ['uclust_test_seqs_3'], - ['uclust_test_seqs_4'], - ['uclust_test_seqs_5'], - ['uclust_test_seqs_6','uclust_test_seqs_8'], - ['uclust_test_seqs_7'], - ['uclust_test_seqs_9']] - - app = UclustOtuPicker(params={'Similarity':0.90, - 'suppress_sort':False, - 'presort_by_abundance':False, - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath1) - obs_otu_ids = obs.keys() - obs_otu_ids.sort() - obs_clusters = obs.values() - obs_clusters.sort() - # The relation between otu ids and clusters is abitrary, and - # is not stable due to use of dicts when parsing clusters -- therefore - # just checks that we have the expected group of each - self.assertEqual(obs_otu_ids, exp_otu_ids) - self.assertEqual(obs_clusters, exp_clusters) - - - def test_call_otu_id_prefix(self): - """UclustOtuPicker.__call__ returns expected clusters with alt threshold - """ - # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs - - exp_otu_ids = ['my_otu_%d' % i for i in range(9)] - exp_clusters = [['uclust_test_seqs_0'], - ['uclust_test_seqs_1'], - ['uclust_test_seqs_2'], - ['uclust_test_seqs_3'], - ['uclust_test_seqs_4'], - ['uclust_test_seqs_5'], - ['uclust_test_seqs_6','uclust_test_seqs_8'], - ['uclust_test_seqs_7'], - ['uclust_test_seqs_9']] - - app = UclustOtuPicker(params={'Similarity':0.90, - 'suppress_sort':False, - 'presort_by_abundance':False, - 'new_cluster_identifier':'my_otu_', - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath1) - obs_otu_ids = obs.keys() - obs_otu_ids.sort() - obs_clusters = obs.values() - obs_clusters.sort() - # The relation between otu ids and clusters is abitrary, and - # is not stable due to use of dicts 
when parsing clusters -- therefore - # just checks that we have the expected group of each - self.assertEqual(obs_otu_ids, exp_otu_ids) - self.assertEqual(obs_clusters, exp_clusters) +# def test_call_alt_threshold(self): +# """UclustOtuPicker.__call__ returns expected clusters with alt threshold +# """ +# # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs +# +# exp_otu_ids = range(9) +# exp_clusters = [['uclust_test_seqs_0'], +# ['uclust_test_seqs_1'], +# ['uclust_test_seqs_2'], +# ['uclust_test_seqs_3'], +# ['uclust_test_seqs_4'], +# ['uclust_test_seqs_5'], +# ['uclust_test_seqs_6','uclust_test_seqs_8'], +# ['uclust_test_seqs_7'], +# ['uclust_test_seqs_9']] +# +# app = UclustOtuPicker(params={'Similarity':0.90, +# 'suppress_sort':False, +# 'presort_by_abundance':False, +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath1) +# obs_otu_ids = obs.keys() +# obs_otu_ids.sort() +# obs_clusters = obs.values() +# obs_clusters.sort() +# # The relation between otu ids and clusters is abitrary, and +# # is not stable due to use of dicts when parsing clusters -- therefore +# # just checks that we have the expected group of each +# self.assertEqual(obs_otu_ids, exp_otu_ids) +# self.assertEqual(obs_clusters, exp_clusters) + + +# def test_call_otu_id_prefix(self): +# """UclustOtuPicker.__call__ returns expected clusters with alt threshold +# """ +# # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs +# +# exp_otu_ids = ['my_otu_%d' % i for i in range(9)] +# exp_clusters = [['uclust_test_seqs_0'], +# ['uclust_test_seqs_1'], +# ['uclust_test_seqs_2'], +# ['uclust_test_seqs_3'], +# ['uclust_test_seqs_4'], +# ['uclust_test_seqs_5'], +# ['uclust_test_seqs_6','uclust_test_seqs_8'], +# ['uclust_test_seqs_7'], +# ['uclust_test_seqs_9']] +# +# app = UclustOtuPicker(params={'Similarity':0.90, +# 'suppress_sort':False, +# 'presort_by_abundance':False, +# 'new_cluster_identifier':'my_otu_', +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath1) +# obs_otu_ids = obs.keys() +# obs_otu_ids.sort() +# obs_clusters = obs.values() +# obs_clusters.sort() +# # The relation between otu ids and clusters is abitrary, and +# # is not stable due to use of dicts when parsing clusters -- therefore +# # just checks that we have the expected group of each +# self.assertEqual(obs_otu_ids, exp_otu_ids) +# self.assertEqual(obs_clusters, exp_clusters) def test_call_suppress_sort(self): """UclustOtuPicker.__call__ handles suppress sort @@ -2716,131 +2716,131 @@ class UclustOtuPickerTests(TestCase): self.assertEqual(obs_otu_ids, exp_otu_ids) self.assertEqual(obs_clusters, exp_clusters) - def test_call_rev_matching(self): - """UclustOtuPicker.__call__ handles reverse strand matching - """ - exp_otu_ids = range(2) - exp_clusters = [['uclust_test_seqs_0'],['uclust_test_seqs_0_rc']] - app = UclustOtuPicker(params={'Similarity':0.90, - 'enable_rev_strand_matching':False, - 'suppress_sort':False, - 'presort_by_abundance':False, - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath3) - obs_otu_ids = obs.keys() - obs_otu_ids.sort() - obs_clusters = obs.values() - obs_clusters.sort() - # The relation between otu ids and clusters is abitrary, and - # is not stable due to use of dicts when parsing clusters -- therefore - # just checks that we have the expected group of each - self.assertEqual(obs_otu_ids, exp_otu_ids) - self.assertEqual(obs_clusters, exp_clusters) - - exp = {0: ['uclust_test_seqs_0','uclust_test_seqs_0_rc']} - app = UclustOtuPicker(params={'Similarity':0.90, - 
'enable_rev_strand_matching':True, - 'suppress_sort':False, - 'presort_by_abundance':False, - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath3) - self.assertEqual(obs, exp) - - def test_call_output_to_file(self): - """UclustHitOtuPicker.__call__ output to file functions as expected - """ - - tmp_result_filepath = get_tmp_filename(\ - prefix='UclustOtuPickerTest.test_call_output_to_file_',\ - suffix='.txt') - - app = UclustOtuPicker(params={'Similarity':0.90, - 'suppress_sort':False, - 'presort_by_abundance':False, - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath1,result_path=tmp_result_filepath) - - result_file = open(tmp_result_filepath) - result_file_str = result_file.read() - result_file.close() - # remove the result file before running the test, so in - # case it fails the temp file is still cleaned up - remove(tmp_result_filepath) - - exp_otu_ids = map(str,range(9)) - exp_clusters = [['uclust_test_seqs_0'], - ['uclust_test_seqs_1'], - ['uclust_test_seqs_2'], - ['uclust_test_seqs_3'], - ['uclust_test_seqs_4'], - ['uclust_test_seqs_5'], - ['uclust_test_seqs_6','uclust_test_seqs_8'], - ['uclust_test_seqs_7'], - ['uclust_test_seqs_9']] - obs_otu_ids = [] - obs_clusters = [] - for line in result_file_str.split('\n'): - if line: - fields = line.split('\t') - obs_otu_ids.append(fields[0]) - obs_clusters.append(fields[1:]) - obs_otu_ids.sort() - obs_clusters.sort() - # The relation between otu ids and clusters is abitrary, and - # is not stable due to use of dicts when parsing clusters -- therefore - # just checks that we have the expected group of each - self.assertEqual(obs_otu_ids, exp_otu_ids) - self.assertEqual(obs_clusters, exp_clusters) - # confirm that nothing is returned when result_path is specified - self.assertEqual(obs,None) - - def test_call_log_file(self): - """UclustOtuPicker.__call__ writes log when expected - """ - - tmp_log_filepath = get_tmp_filename(\ - prefix='UclustOtuPickerTest.test_call_output_to_file_l_',\ - suffix='.txt') - tmp_result_filepath = get_tmp_filename(\ - prefix='UclustOtuPickerTest.test_call_output_to_file_r_',\ - suffix='.txt') - - app = UclustOtuPicker(params={'Similarity':0.99, - 'save_uc_files':False}) - obs = app(self.tmp_seq_filepath1,\ - result_path=tmp_result_filepath,log_path=tmp_log_filepath) - - log_file = open(tmp_log_filepath) - log_file_str = log_file.read() - log_file.close() - # remove the temp files before running the test, so in - # case it fails the temp file is still cleaned up - remove(tmp_log_filepath) - remove(tmp_result_filepath) - - log_file_99_exp = ["UclustOtuPicker parameters:", - "Similarity:0.99","Application:uclust", - "enable_rev_strand_matching:False", - "suppress_sort:True", - "optimal:False", - 'max_accepts:20', - 'max_rejects:500', - 'stepwords:20', - 'word_length:12', - "exact:False", - "Num OTUs:10", - "new_cluster_identifier:None", - "presort_by_abundance:True", - "stable_sort:True", - "output_dir:.", - "save_uc_files:False", - "prefilter_identical_sequences:True", - "Result path: %s" % tmp_result_filepath] - # compare data in log file to fake expected log file - # NOTE: Since app.params is a dict, the order of lines is not - # guaranteed, so testing is performed to make sure that - # the equal unordered lists of lines is present in actual and expected - self.assertEqualItems(log_file_str.split('\n'), log_file_99_exp) +# def test_call_rev_matching(self): +# """UclustOtuPicker.__call__ handles reverse strand matching +# """ +# exp_otu_ids = range(2) +# exp_clusters = 
[['uclust_test_seqs_0'],['uclust_test_seqs_0_rc']] +# app = UclustOtuPicker(params={'Similarity':0.90, +# 'enable_rev_strand_matching':False, +# 'suppress_sort':False, +# 'presort_by_abundance':False, +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath3) +# obs_otu_ids = obs.keys() +# obs_otu_ids.sort() +# obs_clusters = obs.values() +# obs_clusters.sort() +# # The relation between otu ids and clusters is abitrary, and +# # is not stable due to use of dicts when parsing clusters -- therefore +# # just checks that we have the expected group of each +# self.assertEqual(obs_otu_ids, exp_otu_ids) +# self.assertEqual(obs_clusters, exp_clusters) +# +# exp = {0: ['uclust_test_seqs_0','uclust_test_seqs_0_rc']} +# app = UclustOtuPicker(params={'Similarity':0.90, +# 'enable_rev_strand_matching':True, +# 'suppress_sort':False, +# 'presort_by_abundance':False, +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath3) +# self.assertEqual(obs, exp) + +# def test_call_output_to_file(self): +# """UclustHitOtuPicker.__call__ output to file functions as expected +# """ +# +# tmp_result_filepath = get_tmp_filename(\ +# prefix='UclustOtuPickerTest.test_call_output_to_file_',\ +# suffix='.txt') +# +# app = UclustOtuPicker(params={'Similarity':0.90, +# 'suppress_sort':False, +# 'presort_by_abundance':False, +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath1,result_path=tmp_result_filepath) +# +# result_file = open(tmp_result_filepath) +# result_file_str = result_file.read() +# result_file.close() +# # remove the result file before running the test, so in +# # case it fails the temp file is still cleaned up +# remove(tmp_result_filepath) +# +# exp_otu_ids = map(str,range(9)) +# exp_clusters = [['uclust_test_seqs_0'], +# ['uclust_test_seqs_1'], +# ['uclust_test_seqs_2'], +# ['uclust_test_seqs_3'], +# ['uclust_test_seqs_4'], +# ['uclust_test_seqs_5'], +# ['uclust_test_seqs_6','uclust_test_seqs_8'], +# ['uclust_test_seqs_7'], +# ['uclust_test_seqs_9']] +# obs_otu_ids = [] +# obs_clusters = [] +# for line in result_file_str.split('\n'): +# if line: +# fields = line.split('\t') +# obs_otu_ids.append(fields[0]) +# obs_clusters.append(fields[1:]) +# obs_otu_ids.sort() +# obs_clusters.sort() +# # The relation between otu ids and clusters is abitrary, and +# # is not stable due to use of dicts when parsing clusters -- therefore +# # just checks that we have the expected group of each +# self.assertEqual(obs_otu_ids, exp_otu_ids) +# self.assertEqual(obs_clusters, exp_clusters) +# # confirm that nothing is returned when result_path is specified +# self.assertEqual(obs,None) + +# def test_call_log_file(self): +# """UclustOtuPicker.__call__ writes log when expected +# """ +# +# tmp_log_filepath = get_tmp_filename(\ +# prefix='UclustOtuPickerTest.test_call_output_to_file_l_',\ +# suffix='.txt') +# tmp_result_filepath = get_tmp_filename(\ +# prefix='UclustOtuPickerTest.test_call_output_to_file_r_',\ +# suffix='.txt') +# +# app = UclustOtuPicker(params={'Similarity':0.99, +# 'save_uc_files':False}) +# obs = app(self.tmp_seq_filepath1,\ +# result_path=tmp_result_filepath,log_path=tmp_log_filepath) +# +# log_file = open(tmp_log_filepath) +# log_file_str = log_file.read() +# log_file.close() +# # remove the temp files before running the test, so in +# # case it fails the temp file is still cleaned up +# remove(tmp_log_filepath) +# remove(tmp_result_filepath) +# +# log_file_99_exp = ["UclustOtuPicker parameters:", +# "Similarity:0.99","Application:uclust", +# "enable_rev_strand_matching:False", +# 
"suppress_sort:True", +# "optimal:False", +# 'max_accepts:20', +# 'max_rejects:500', +# 'stepwords:20', +# 'word_length:12', +# "exact:False", +# "Num OTUs:10", +# "new_cluster_identifier:None", +# "presort_by_abundance:True", +# "stable_sort:True", +# "output_dir:.", +# "save_uc_files:False", +# "prefilter_identical_sequences:True", +# "Result path: %s" % tmp_result_filepath] +# # compare data in log file to fake expected log file +# # NOTE: Since app.params is a dict, the order of lines is not +# # guaranteed, so testing is performed to make sure that +# # the equal unordered lists of lines is present in actual and expected +# self.assertEqualItems(log_file_str.split('\n'), log_file_99_exp) def test_map_filtered_clusters_to_full_clusters(self): debian/patches/fix_binary_helper_location.patch0000644000000000000000000000077512255246621017235 0ustar --- a/qiime/denoiser/utils.py +++ b/qiime/denoiser/utils.py @@ -60,8 +60,9 @@ def get_denoiser_data_dir(): def get_flowgram_ali_exe(): """Return the path to the flowgram alignment prog """ - fp = get_qiime_scripts_dir() + "/FlowgramAli_4frame" - return fp + #fp = get_qiime_scripts_dir() + "/FlowgramAli_4frame" + #return fp + return "/usr/lib/qiime/support_files/denoiser/bin/FlowgramAli_4frame" def check_flowgram_ali_exe(): """Check if we have a working FlowgramAligner""" debian/watch0000644000000000000000000000020012255272730010214 0ustar version=3 opts=dversionmangle=s/[~\+]dfsg// \ https://github.com/qiime/qiime/releases /qiime/qiime/archive/([.0-9]+)\.tar\.gz debian/qiime-doc.install0000644000000000000000000000025512214301047012421 0ustar examples/ usr/share/doc/qiime doc/_build/* usr/share/doc/qiime doc/vb_files usr/share/doc/qiime qiime/support_files/denoiser/TestData usr/lib/qiime/support_files/denoiser debian/changelog0000644000000000000000000002345012260122610011034 0ustar qiime (1.8.0+dfsg-2) unstable; urgency=medium * debian/qiime.links: Fix links to new biom_format * debian/control: - New dependency: python-qcli (this should be dropped in version 2.0 of qiime) - Recommends: python-mpi4py since this is used in some tests * debian/rules: Remove remainings from running test suite -- Andreas Tille Thu, 26 Dec 2013 07:39:57 +0100 qiime (1.8.0+dfsg-1) unstable; urgency=low * New upstream version * Fixed debian/watch * Suggests: torque-client Closes: #721902 * debian/control: - cme fix dpkg-control - rdp-classifier can only be suggested since it is not in Debian and currently it seems that it can only go into non-free - Canonical Vcs URLs * debian/patches/prevent_google_addsense.patch: Remove Google Addsense from user documentation to save user privacy -- Andreas Tille Sat, 21 Dec 2013 08:50:20 +0100 qiime (1.7.0+dfsg-1) unstable; urgency=low * Upload preparations done for BioLinux to Debian -- Andreas Tille Mon, 17 Jun 2013 18:28:26 +0200 qiime (1.7.0-0biolinux1) precise; urgency=low * New usptream * Adopt new naming convention for bl releases * Add usearch script to match uclust warning message * Re-fix biom-format script aliases (as links, needed for script to see them) * Clean out old patches * Patch to fix significant typo in error message * Patch to stop warning for matplotlib * Depend on newer Pynast (Pynast now maintained along with QIIME) -- Tim Booth Tue, 11 Jun 2013 16:49:19 +0100 qiime (1.6.0-0ubuntu2) precise; urgency=low * Add some aliases so that the python-biom-format tools can be called with .py extensions inside the qiime shell, because this is consistent with what QIIME users expect. 
-- Tim Booth Thu, 02 May 2013 15:05:34 +0100 qiime (1.6.0-0ubuntu1) precise; urgency=low * New upstream release * Don't bother re-packing the tarball for PPA. Package now builds the same with re-packed or pristine. * Remove t-test bug patch as this is in upstream * Re-jig RDP version patch which is simplified but still needed * Note the upstream no longer includes an example parameters.txt -- Tim Booth Fri, 04 Jan 2013 12:51:49 +0000 qiime (1.5.0+repack-2ubuntu11) precise; urgency=low [Tim Booth] * The support_files patch only solved part of the problem, so trying a different, and simpler, tack. * Changes from Debian: [Andreas Tille] * debian/rules: Use xz compression for binary packages -- Tim Booth Tue, 20 Nov 2012 12:26:24 +0000 qiime (1.5.0+dfsg-1) unstable; urgency=low * debian/copyright: - Add Files-Excluded to document what was removed from original source * debian/watch: Enable '+dfsg' / '+repack' suffix in Debian versions * debian/control: - Remove DM-Upload-Allowed - Standards-Version: 3.9.4 (no changes needed) - Priority: optional (per Debian Med policy) - normalised format - Build-Depends: ghc (instead of ghc6) Closes: #710774 -- Andreas Tille Wed, 29 Aug 2012 14:30:02 +0200 qiime (1.5.0+repack-2ubuntu10) precise; urgency=low * Made QIIME accept RDP classifier >2.2 though the patch is a total hack. Solves issue reported by MJC. * Add patch to make QIIME look in the Debian standard location for king.jar and other support_files * Added patch to fix alpha diversity bug and tested by running test_compare_alpha_diversities.py * QIIME now needs python-cogent 1.5.3 -- Tim Booth Wed, 14 Nov 2012 09:59:24 +0000 qiime (1.5.0+repack-2ubuntu5) precise; urgency=low * Tweaked startup script to report the QIIME version -- Tim Booth Thu, 11 Oct 2012 13:49:35 +0100 qiime (1.5.0+repack-2ubuntu4) precise; urgency=low * Remove python26_trim_sff_primers.patch as Qiime now requires 2.7 anyway * Avoid calling dh_auto_clean at all as it triggers a partial rebuild * Added various support files in /usr/share/qiime/support_files * Added various symlinks as Qiime wants everything in one place * Set python-biom-format as a hard dependency because even built-in self-test won't start without it * Added patch to self-test where it looks for default config file * Modified to work with new RDP-classifier package -- Tim Booth Tue, 14 Aug 2012 17:53:13 +0100 qiime (1.5.0-2) unstable; urgency=low * debian/control: - Recommends: python-biom-format Closes: #682199 - Depends: python >= 2.7 Closes: #682198 * debian/get-orig-source: For next release use xz compression of repackaged source * debian/rules: use xz compression -- Andreas Tille Mon, 06 Aug 2012 09:22:42 +0200 qiime (1.5.0-1) unstable; urgency=low * New upstream version (adapted patches + remove patch that was applied upstream) * debian/get-orig-source: Strip *.jar file from upstream source which should never have been there * debian/upstream: - more complete authors information in BibTeX conform format - Moved DOI+PMID to References * debian/qiime.links: Link to king.jar * debian/control: Depends: king * debhelper 9 (control+compat) -- Andreas Tille Tue, 08 May 2012 18:07:56 +0200 qiime (1.4.0-2) unstable; urgency=low * debian/qiime.install: Include *.py files Closes: #668999 -- Andreas Tille Mon, 16 Apr 2012 15:39:02 +0200 qiime (1.4.0-1) unstable; urgency=low * New upstream version (updated patches) * debian/patches/fix_shebang_lines.patch: Removed because file to patch was removed upstream * debian/copyright: rewritten to confirm DEP5 and verified using 
cme fix dpkg-copyright -- Andreas Tille Tue, 20 Mar 2012 14:41:20 +0100 qiime (1.3.0-3) unstable; urgency=low [ Charles Plessy ] * renamed debian/upstream-metadata.yaml to debian/upstream [ Andreas Tille ] * Standards-Version: 3.9.3 (no changes needed) * debian/patches/ghc_7.4.2_compatibility.patch: Fix build problem based on illegal use of 'import System' (Thanks to Joachim Breitner ) for the patch Closes: #663889 -- Andreas Tille Sat, 17 Mar 2012 19:19:38 +0100 qiime (1.3.0-2) unstable; urgency=low * debian/control: - Added myself to uploaders - moved from contrib/science to science because there are no non-free components used any more - Removed duplicated entry ${misc:Depends} - Extra package qiime-doc arch all to avoid - Provides/Replaces/Conflicts denoiser - Fix Vcs fields - DM-Upload-Allowed: yes - Build-Depends: + python-all-dev (>= 2.6) (instead of python-central) + python-sphinx to create python documentation * debian/rules: - short dh notation instead of cdbs Closes: #639389 - allow overriding python2 by environment which helps backporters like Tim for BioLinux - use calculated $(pkg) variable instead of fixed string 'qiime' * debian/{rules,qiime.install}: Adapted to new binary package layout * debian/qiime.{doc,links}: Move documentation into place and use packaged JavaScript libraries * debian/qiime-doc.doc-base * debian/upstream-metadata.yaml -- Andreas Tille Tue, 29 Nov 2011 09:23:18 +0100 qiime (1.3.0-0ubuntu6) lucid; urgency=low * Attempt to fix some errors reported by print_qiime_config -t Fixes to environment, configuration and small patches -- Tim Booth Wed, 02 Nov 2011 18:06:37 +0000 qiime (1.3.0-1) unstable; urgency=low * Bringing qiime to Debian [Steffen] * Some reformatting of debian/control -- Tim Booth Fri, 26 Aug 2011 17:48:30 +0200 qiime (1.3.0-0ubuntu5) lucid; urgency=low * Very minor fix to scripts/shell/qiime_environment -- Tim Booth Tue, 16 Aug 2011 12:35:06 +0100 qiime (1.3.0-0ubuntu4) lucid; urgency=low * New upstream release. Needs newer Python-Cogent and includes built-in denoiser, so I have now aliased denoiser to run the qiime wrapper. * As the upstream now contains a tiny compiled package I've had to make it arch dependent (support_files/denoiser/FlowgramAlignment code written in Haskell). * Added build dep on ghc6 for the same reason -- Tim Booth Tue, 19 Jul 2011 17:38:28 +0100 qiime (1.2.1-ubuntu5) lucid; urgency=low * Fixed silly error in qiime wrapper script * Added uclust wrapper to deal with uclust being expected but missing -- Tim Booth Thu, 24 Mar 2011 16:52:43 +0000 qiime (1.2.1-ubuntu4) lucid; urgency=low * Fixed dependency - needs python-cogent >= 1.5 -- Tim Booth Thu, 24 Mar 2011 15:33:03 +0000 qiime (1.2.1-ubuntu3) lucid; urgency=low * Moved .py scripts out of /usr/bin * Added /usr/bin/qiime wrapper script as per Bio-Linux * Added default configuration file * Cleaned up documentation * Added many dependencies (some of which are still not packaged) -- Tim Booth Thu, 24 Mar 2011 11:35:58 +0000 qiime (1.2.1-ubuntu2) lucid; urgency=low * Pulled 1.2.1 source and tried a rebuild. * Fixed Lintian changelog warning. -- Tim Booth Tue, 22 Mar 2011 13:55:49 +0000 qiime (1.2.1-2) experimental; urgency=low * Initial release (Closes: #587275) * Fixed Python 2.6 incompatibility. 
-- Steffen Moeller Wed, 02 Mar 2011 22:33:45 +0100 qiime (1.2.1-1) experimental; urgency=low * Corrected home page * Updated policy to 3.9.1 * Updated debian/README.Debian -- Steffen Moeller Wed, 02 Mar 2011 18:19:39 +0100 qiime (1.1.0-1) experimental; urgency=low * Initial release. -- Sri Girish Srinivasa Murthy Tue, 20 Jul 2010 00:03:02 +0200 debian/qiime.install0000644000000000000000000000166112214301047011660 0ustar debian/scripts/qiime usr/bin debian/scripts/shell/* usr/lib/qiime/shell debian/scripts/shell/.[bz]* usr/lib/qiime/shell debian/scripts/qiime_config etc/qiime debian/scripts/uclust usr/lib/qiime/bin debian/scripts/usearch* usr/lib/qiime/bin debian/tmp/usr/bin/*.py usr/lib/qiime/bin qiime/support_files/denoiser/FlowgramAlignment/FlowgramAli_4frame usr/lib/qiime/support_files/denoiser/bin qiime/*.py usr/share/pyshared/qiime qiime/denoiser usr/share/pyshared/qiime qiime/parallel usr/share/pyshared/qiime qiime/workflow usr/share/pyshared/qiime qiime/pycogent_backports usr/share/pyshared/qiime qiime/support_files/denoiser/Data usr/share/qiime/support_files/denoiser qiime/support_files/css/* usr/share/qiime/support_files/css qiime/support_files/images/* usr/share/qiime/support_files/images qiime/support_files/R/* usr/share/qiime/support_files/R qiime/support_files/js/* usr/share/qiime/support_files/js debian/compat0000644000000000000000000000000212214301047010356 0ustar 9 debian/qiime-doc.links0000644000000000000000000000025712214301047012075 0ustar usr/share/javascript/jquery/jquery.js usr/share/doc/qiime/html/_static/jquery.js usr/share/javascript/underscore/underscore.js usr/share/doc/qiime/html/_static/underscore.js debian/control0000644000000000000000000001207412260123132010565 0ustar Source: qiime Maintainer: Debian Med Packaging Team Uploaders: Steffen Moeller , Tim Booth , Andreas Tille Section: science Priority: optional Build-Depends: debhelper (>= 9), python-all-dev (>= 2.7), python-cogent (>= 1.5.3), python-numpy, python-matplotlib, ghc, python-sphinx Standards-Version: 3.9.5 Vcs-Browser: http://anonscm.debian.org/viewvc/debian-med/trunk/packages/qiime/trunk/ Vcs-Svn: svn://anonscm.debian.org/debian-med/trunk/packages/qiime/trunk/ Homepage: http://www.qiime.org/ X-Python-Version: >= 2.7 Package: qiime Architecture: any Depends: ${shlibs:Depends}, ${misc:Depends}, ${python:Depends}, pynast (>= 1.2), python-cogent (>= 1.5.3), king, python-biom-format, python-qcli Recommends: blast2 | blast+-legacy, cd-hit, chimeraslayer, muscle, infernal, fasttree, ampliconnoise, python-matplotlib, python-numpy, python-mpi4py, libjs-jquery Suggests: t-coffee, cytoscape, rdp-classifier, torque-client Conflicts: denoiser Provides: denoiser Replaces: denoiser Description: Quantitative Insights Into Microbial Ecology QIIME (canonically pronounced ‘Chime’) is a pipeline for performing microbial community analysis that integrates many third party tools which have become standard in the field. A standard QIIME analysis begins with sequence data from one or more sequencing platforms, including * Sanger, * Roche/454, and * Illumina GAIIx. 
With all the underlying tools installed, of which not all are yet available in Debian (or any other Linux distribution), QIIME can perform * library de-multiplexing and quality filtering; * denoising with PyroNoise; * OTU and representative set picking with uclust, cdhit, mothur, BLAST, or other tools; * taxonomy assignment with BLAST or the RDP classifier; * sequence alignment with PyNAST, muscle, infernal, or other tools; * phylogeny reconstruction with FastTree, raxml, clearcut, or other tools; * alpha diversity and rarefaction, including visualization of results, using over 20 metrics including Phylogenetic Diversity, chao1, and observed species; * beta diversity and rarefaction, including visualization of results, using over 25 metrics including weighted and unweighted UniFrac, Euclidean distance, and Bray-Curtis; * summarization and visualization of taxonomic composition of samples using pie charts and histograms and many other features. . QIIME includes parallelization capabilities for many of the computationally intensive steps. By default, these are configured to utilize a multi-core environment, and are easily configured to run in a cluster environment. QIIME is built in Python using the open-source PyCogent toolkit. It makes extensive use of unit tests, and is highly modular to facilitate custom analyses. Package: qiime-doc Architecture: all Section: doc Depends: ${misc:Depends}, libjs-jquery, libjs-underscore Description: Quantitative Insights Into Microbial Ecology (tutorial) QIIME (canonically pronounced ‘Chime’) is a pipeline for performing microbial community analysis that integrates many third party tools which have become standard in the field. A standard QIIME analysis begins with sequence data from one or more sequencing platforms, including * Sanger, * Roche/454, and * Illumina GAIIx. With all the underlying tools installed, of which not all are yet available in Debian (or any other Linux distribution), QIIME can perform * library de-multiplexing and quality filtering; * denoising with PyroNoise; * OTU and representative set picking with uclust, cdhit, mothur, BLAST, or other tools; * taxonomy assignment with BLAST or the RDP classifier; * sequence alignment with PyNAST, muscle, infernal, or other tools; * phylogeny reconstruction with FastTree, raxml, clearcut, or other tools; * alpha diversity and rarefaction, including visualization of results, using over 20 metrics including Phylogenetic Diversity, chao1, and observed species; * beta diversity and rarefaction, including visualization of results, using over 25 metrics including weighted and unweighted UniFrac, Euclidean distance, and Bray-Curtis; * summarization and visualization of taxonomic composition of samples using pie charts and histograms and many other features. . QIIME includes parallelization capabilities for many of the computationally intensive steps. By default, these are configured to utilize a multi-core environment, and are easily configured to run in a cluster environment. QIIME is built in Python using the open-source PyCogent toolkit. It makes extensive use of unit tests, and is highly modular to facilitate custom analyses. . This package contains the documentation and a tutorial. debian/qiime.docs0000644000000000000000000000000712214301047011133 0ustar README