vision_opencv-1.12.3/
vision_opencv-1.12.3/.gitignore
build*/
._*
*.pyc
.*
*~
vision_opencv-1.12.3/.travis.sh
#!/bin/bash
set -e

function travis_time_start {
    set +x
    TRAVIS_START_TIME=$(date +%s%N)
    TRAVIS_TIME_ID=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
    TRAVIS_FOLD_NAME=$1
    echo -e "\e[0Ktravis_fold:start:$TRAVIS_FOLD_NAME"
    echo -e "\e[0Ktravis_time:start:$TRAVIS_TIME_ID"
    set -x
}
function travis_time_end {
    set +x
    _COLOR=${1:-32}
    TRAVIS_END_TIME=$(date +%s%N)
    TIME_ELAPSED_SECONDS=$(( ($TRAVIS_END_TIME - $TRAVIS_START_TIME)/1000000000 ))
    echo -e "travis_time:end:$TRAVIS_TIME_ID:start=$TRAVIS_START_TIME,finish=$TRAVIS_END_TIME,duration=$(($TRAVIS_END_TIME - $TRAVIS_START_TIME))\n\e[0K"
    echo -e "travis_fold:end:$TRAVIS_FOLD_NAME"
    echo -e "\e[0K\e[${_COLOR}mFunction $TRAVIS_FOLD_NAME takes $(( $TIME_ELAPSED_SECONDS / 60 )) min $(( $TIME_ELAPSED_SECONDS % 60 )) sec\e[0m"
    set -x
}

apt-get update -qq && apt-get install -qq -y -q wget sudo lsb-release # for docker

travis_time_start setup.before_install
#before_install:
# Define some config vars.
# Install ROS
sudo sh -c "echo \"deb http://packages.ros.org/ros-shadow-fixed/ubuntu `lsb_release -cs` main\" > /etc/apt/sources.list.d/ros-latest.list"
wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
sudo apt-get update -qq
# Install ROS
sudo apt-get install -qq -y python-catkin-pkg python-catkin-tools python-rosdep python-wstool ros-$ROS_DISTRO-catkin
source /opt/ros/$ROS_DISTRO/setup.bash
# Setup for rosdep
sudo rosdep init
rosdep update

travis_time_end

travis_time_start setup.install
# Create a catkin workspace with the package under test.
#install:
mkdir -p ~/catkin_ws/src
# Add the package under test to the workspace.
cd ~/catkin_ws/src
ln -s $CI_SOURCE_PATH . # Link the repo we are testing to the new workspace

travis_time_end

travis_time_start setup.before_script
# Install all dependencies, using wstool and rosdep.
# wstool looks for a ROSINSTALL_FILE defined in before_install.
#before_script:
# source dependencies: install using wstool.
cd ~/catkin_ws/src
wstool init
wstool up
# package dependencies: install using rosdep.
cd ~/catkin_ws
rosdep install -q -y -r --from-paths src --ignore-src --rosdistro $ROS_DISTRO

travis_time_end

travis_time_start setup.script
# Compile and test.
#script:
source /opt/ros/$ROS_DISTRO/setup.bash
cd ~/catkin_ws
catkin build -p1 -j1 --no-status
catkin run_tests -p1 -j1
catkin_test_results --all build
catkin clean -b --yes
catkin config --install
catkin build -p1 -j1 --no-status

travis_time_end
vision_opencv-1.12.3/.travis.yml
sudo: required
dist: trusty
language: generic
env:
  - ROS_DISTRO=kinetic
# Install system dependencies, namely ROS.
before_install:
  # Define some config vars.
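  # (Illustrative note, not part of the original config.) The .travis.sh script
  # above wraps each CI phase in the travis_time_start/travis_time_end helpers,
  # which open and close a named, timed fold in the Travis log, e.g.:
  #
  #   travis_time_start setup.install   # opens the fold "setup.install"
  #   mkdir -p ~/catkin_ws/src          # ...commands for this phase...
  #   travis_time_end                   # closes the fold and prints elapsed time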
- export CI_SOURCE_PATH=$(pwd) - export REPOSITORY_NAME=${PWD##*/} - export ROS_PARALLEL_JOBS='-j8 -l6' script: - echo "Testing branch $TRAVIS_BRANCH of $REPOSITORY_NAME" - docker run --rm -i -v $CI_SOURCE_PATH:$CI_SOURCE_PATH -e "CI_SOURCE_PATH=$CI_SOURCE_PATH" -e "HOME=$HOME" -e "ROS_DISTRO=$ROS_DISTRO" -e "ROS_PARALLEL_JOBS=$ROS_PARALLEL_JOBS" -t ubuntu:xenial sh -c "cd $CI_SOURCE_PATH; ./.travis.sh" after_failure: - find ${HOME}/.ros/test_results -type f -exec echo "== {} ==" \; -exec cat {} \; - for file in ${HOME}/.ros/log/rostest-*; do echo "=== $file ==="; cat $file; done vision_opencv-1.12.3/README.rst000066400000000000000000000004321302104371500161410ustar00rootroot00000000000000vision_opencv ============= .. image:: https://travis-ci.org/ros-perception/vision_opencv.svg?branch=indigo :target: https://travis-ci.org/ros-perception/vision_opencv Packages for interfacing ROS with OpenCV, a library of programming functions for real time computer vision. vision_opencv-1.12.3/cv_bridge/000077500000000000000000000000001302104371500163775ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/CHANGELOG.rst000066400000000000000000000320151302104371500204210ustar00rootroot00000000000000^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Changelog for package cv_bridge ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1.12.3 (2016-12-04) ------------------- * Use api in sensor_msgs to get byte_depth and num channels * Implement cpp conversion of N channel image This is cpp version of https://github.com/ros-perception/vision_opencv/pull/141, which is one for python. * Fill black color to depth nan region * address gcc6 build error in cv_bridge and tune With gcc6, compiling fails with `stdlib.h: No such file or directory`, as including '-isystem /usr/include' breaks with gcc6, cf., https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129 This commit addresses this issue for cv_bridge in the same way it was done in the commit ead421b8 [1] for image_geometry. This issue was also addressed in various other ROS packages. A list of related commits and pull requests is at: https://github.com/ros/rosdistro/issues/12783 [1] https://github.com/ros-perception/vision_opencv/commit/ead421b85eeb750cbf7988657015296ed6789bcf Signed-off-by: Lukas Bulwahn * cv_bridge: Add missing test_depend on numpy * Contributors: Kentaro Wada, Lukas Bulwahn, Maarten de Vries 1.12.2 (2016-09-24) ------------------- * Specify background label when colorizing label image * Adjust to arbitrary image channels like 32FC40 Proper fix for `#141 `_ * Remove unexpectedly included print statement * Contributors: Kentaro Wada, Vincent Rabaud 1.12.1 (2016-07-11) ------------------- * split the conversion tests out of enumerants * support is_bigendian in Python Fixes `#114 `_ Also fixes mono16 test * Support compressed Images messages in python for indigo - Add cv2_to_comprssed_imgmsg: Convert from cv2 image to compressed image ros msg. - Add comprssed_imgmsg_to_cv2: Convert the compress message to a new image. - Add compressed image tests. - Add time to msgs (compressed and regular). add enumerants test for compressed image. merge the compressed tests with the regular ones. better comment explanation. I will squash this commit. Fix indentation fix typo mistage: from .imgmsg_to_compressed_cv2 to .compressed_imgmsg_to_cv2. remove cv2.CV_8UC1 remove rospy and time depndency. change from IMREAD_COLOR to IMREAD_ANYCOLOR. - make indentaion of 4. - remove space trailer. - remove space from empty lines. - another set of for loops, it will make things easier to track. 
In that new set, just have the number of channels in ([],1,3,4) (ignore two for jpg). from: https://github.com/ros-perception/vision_opencv/pull/132#discussion_r66721943 - keep the OpenCV error message. from: https://github.com/ros-perception/vision_opencv/pull/132#discussion_r66721013 add debug print for test. add case for 4 channels in test. remove 4 channels case from compressed test. add debug print for test. change typo of format. fix typo in format. change from dip to dib. change to IMREAD_ANYCOLOR as python code. (as it should). rename TIFF to tiff Sperate the tests one for regular images and one for compressed. update comment * Add CvtColorForDisplayOptions with new colormap param * fix doc jobs * Add python binding for cv_bridge::cvtColorForDisplay * Don't colorize float image as label image This is a bug and image whose encoding is other than 32SC1 should not be colorized. (currently, depth images with 32FC1 is also colorized.) * Fix compilation of cv_bridge with opencv3 and python3. * Contributors: Kentaro Wada, Maarten de Vries, Vincent Rabaud, talregev 1.12.0 (2016-03-18) ------------------- * depend on OpenCV3 only * Contributors: Vincent Rabaud 1.11.12 (2016-03-10) -------------------- * Fix my typo * Remove another eval Because `cvtype2_to_dtype_with_channels('8UCimport os; os.system("rm -rf /")')` should never have a chance of happening. * Remove eval, and other fixes Also, extend from object, so as not to get a python 2.2-style class, and use the new-style raise statement * Contributors: Eric Wieser 1.11.11 (2016-01-31) -------------------- * clean up the doc files * fix a few warnings in doc jobs * Contributors: Vincent Rabaud 1.11.10 (2016-01-16) -------------------- * fix OpenCV3 build * Describe about converting label to bgr image in cvtColorForDisplay * Convert label to BGR image to display * Add test for rgb_colors.cpp * Add rgb_colors util * Update doc for converting to BGR in cvtColorForDisplay * Convert to BGR from any encoding * Refactor: sensor_msgs::image_encodings -> enc * Contributors: Kentaro Wada, Vincent Rabaud 1.11.9 (2015-11-29) ------------------- * deal with endianness * add cvtColorForDisplay * Improved efficiency by using toCvShare instead of toCvCopy. * Add format enum for easy use and choose format. 
* fix compilation warnings * start to extend the cv_bridge with cvCompressedImage class, that will convert from cv::Mat opencv images to CompressedImage ros messages and vice versa * Contributors: Carlos Costa, Vincent Rabaud, talregev 1.11.8 (2015-07-15) ------------------- * Simplify some OpenCV3 distinction * fix tests * fix test under OpenCV3 * Remove Python for Android * Contributors: Gary Servin, Vincent Rabaud 1.11.7 (2014-12-14) ------------------- * check that the type is indeed a Numpy one This is in response to `#51 `_ * Contributors: Vincent Rabaud 1.11.6 (2014-11-16) ------------------- * chnage the behavior when there is only one channel * cleanup tests * Contributors: Vincent Rabaud 1.11.5 (2014-09-21) ------------------- * get code to work with OpenCV3 actually fixes `#46 `_ properly * Contributors: Vincent Rabaud 1.11.4 (2014-07-27) ------------------- * Fix `#42 `_ * Contributors: Libor Wagner 1.11.3 (2014-06-08) ------------------- * Correct dependency from non-existent package to cv_bridge * Contributors: Isaac Isao Saito 1.11.2 (2014-04-28) ------------------- * Add depend on python for cv_bridge * Contributors: Scott K Logan 1.11.1 (2014-04-16) ------------------- * fixes `#34 `_ * Contributors: Vincent Rabaud 1.11.0 (2014-02-15) ------------------- * remove deprecated API and fixes `#33 `_ * fix OpenCV dependencies * Contributors: Vincent Rabaud 1.10.15 (2014-02-07) -------------------- * fix python 3 error at configure time * Contributors: Dirk Thomas 1.10.14 (2013-11-23 16:17) -------------------------- * update changelog * Find NumPy include directory * Contributors: Brian Jensen, Vincent Rabaud 1.10.13 (2013-11-23 09:19) -------------------------- * fix compilation on older NumPy * Contributors: Vincent Rabaud 1.10.12 (2013-11-22) -------------------- * bump changelog * Fixed issue with image message step size * fix crash for non char data * fix `#26 `_ * Contributors: Brian Jensen, Vincent Rabaud 1.10.11 (2013-10-23) -------------------- * fix bad image check and improve it too * Contributors: Vincent Rabaud 1.10.10 (2013-10-19) -------------------- * fixes `#25 `_ * Contributors: Vincent Rabaud 1.10.9 (2013-10-07) ------------------- * fixes `#20 `_ * Contributors: Vincent Rabaud 1.10.8 (2013-09-09) ------------------- * fixes `#22 `_ * fixes `#17 `_ * check for CATKIN_ENABLE_TESTING * fixes `#16 `_ * update email address * Contributors: Lukas Bulwahn, Vincent Rabaud 1.10.7 (2013-07-17) ------------------- 1.10.6 (2013-03-01) ------------------- * make sure conversion are applied for depth differences * Contributors: Vincent Rabaud 1.10.5 (2013-02-11) ------------------- 1.10.4 (2013-02-02) ------------------- * fix installation of the boost package * Contributors: Vincent Rabaud 1.10.3 (2013-01-17) ------------------- * Link against PTYHON_LIBRARIES * Contributors: William Woodall 1.10.2 (2013-01-13) ------------------- * use CATKIN_DEVEL_PREFIX instead of obsolete CATKIN_BUILD_PREFIX * Contributors: Dirk Thomas 1.10.1 (2013-01-10) ------------------- * add licenses * fixes `#5 `_ by removing the logic from Python and using wrapped C++ and adding a test for it * fix a bug discovered when running the opencv_tests * use some C++ logic * add a Boost Python module to have the C++ logix used directly in Python * Contributors: Vincent Rabaud 1.10.0 (2013-01-03) ------------------- * add conversion from Bayer to gray * Contributors: Vincent Rabaud 1.9.15 (2013-01-02) ------------------- * use the reverted isColor behavior * Contributors: Vincent Rabaud 1.9.14 
(2012-12-30) ------------------- 1.9.13 (2012-12-15) ------------------- * use the catkin macros for the setup.py * fix `#3 `_ * Contributors: Vincent Rabaud 1.9.12 (2012-12-14) ------------------- * buildtool_depend catkin fix * CMakeLists.txt clean up. * Contributors: William Woodall 1.9.11 (2012-12-10) ------------------- * fix issue `#1 `_ * Cleanup of package.xml * Contributors: Vincent Rabaud, William Woodall 1.9.10 (2012-10-04) ------------------- * fix the bad include folder * Contributors: Vincent Rabaud 1.9.9 (2012-10-01) ------------------ * fix dependencies * Contributors: Vincent Rabaud 1.9.8 (2012-09-30) ------------------ * fix some dependencies * add rosconsole as a dependency * fix missing Python at install and fix some dependencies * Contributors: Vincent Rabaud 1.9.7 (2012-09-28 21:07) ------------------------ * add missing stuff * make sure we find catkin * Contributors: Vincent Rabaud 1.9.6 (2012-09-28 15:17) ------------------------ * move the test to where it belongs * fix the tests and the API to not handle conversion from CV_TYPE to Color type (does not make sense) * comply to the new Catkin API * backport the YUV422 bug fix from Fuerte * apply patch from https://code.ros.org/trac/ros-pkg/ticket/5556 * Contributors: Vincent Rabaud 1.9.5 (2012-09-15) ------------------ * remove dependencies to the opencv2 ROS package * Contributors: Vincent Rabaud 1.9.4 (2012-09-13) ------------------ * make sure the include folders are copied to the right place * Contributors: Vincent Rabaud 1.9.3 (2012-09-12) ------------------ 1.9.2 (2012-09-07) ------------------ * be more compliant to the latest catkin * added catkin_project() to cv_bridge, image_geometry, and opencv_tests * Contributors: Jonathan Binney, Vincent Rabaud 1.9.1 (2012-08-28 22:06) ------------------------ * remove things that were marked as ROS_DEPRECATED * Contributors: Vincent Rabaud 1.9.0 (2012-08-28 14:29) ------------------------ * catkinized opencv_tests by Jon Binney * catkinized cv_bridge package... others disable for now by Jon Binney * remove the version check, let's trust OpenCV :) * revert the removal of opencv2 * vision_opencv: Export OpenCV flags in manifests for image_geometry, cv_bridge. * finally get rid of opencv2 as it is a system dependency now * bump REQUIRED version of OpenCV to 2.3.2, which is what's in ros-fuerte-opencv * switch rosdep name to opencv2, to refer to ros-fuerte-opencv2 * added missing header * Added constructor to CvImage to make converting a cv::Mat to sensor_msgs::Image less verbose. * cv_bridge: Added unit test for `#5206 `_ * cv_bridge: Applied patch from mdesnoyer to fix handling of non-continuous OpenCV images. `#5206 `_ * Adding opencv2 to all manifests, so that client packages may not break when using them. * baking in opencv debs and attempting a pre-release * cv_bridge: Support for new 16-bit encodings. * cv_bridge: Deprecate old C++ cv_bridge API. * cv_bridge: Correctly scale for MONO8 <-> MONO16 conversions. * cv_bridge: Fixed issue where pointer version to toCvCopy would ignore the requested encoding (http://answers.ros.org/question/258/converting-kinect-rgb-image-to-opencv-gives-wrong). * fixed doc build by taking a static snapshot * cv_bridge: Marking doc reviewed. * cv_bridge: Tweaks to make docs look better. * cv_bridge: Added cvtColor(). License notices. Documented that CvBridge class is obsolete. * cv_bridge: Added redesigned C++ cv_bridge. 
* Doc cleanup * Trigger doc rebuild * mono16 -> bgr conversion tested and fixed in C * Added Ubuntu platform tags to manifest * Handle mono16 properly * Raise exception when imgMsgToCv() gets an image encoding it does not recognise, `#3489 `_ * Remove use of deprecated rosbuild macros * Fixed example * cv_bridge split from opencv2 * Contributors: Vincent Rabaud, ethanrublee, gerkey, jamesb, mihelich, vrabaud, wheeler vision_opencv-1.12.3/cv_bridge/CMakeLists.txt000066400000000000000000000013641302104371500211430ustar00rootroot00000000000000cmake_minimum_required(VERSION 2.8) project(cv_bridge) find_package(catkin REQUIRED COMPONENTS rosconsole sensor_msgs) if(NOT ANDROID) find_package(Boost REQUIRED python) else() find_package(Boost REQUIRED) endif() find_package(OpenCV REQUIRED) catkin_package( INCLUDE_DIRS include LIBRARIES ${PROJECT_NAME} CATKIN_DEPENDS rosconsole sensor_msgs DEPENDS OpenCV ) catkin_python_setup() include_directories(include ${Boost_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ${catkin_INCLUDE_DIRS}) if(NOT ANDROID) add_subdirectory(python) endif() add_subdirectory(src) if(CATKIN_ENABLE_TESTING) add_subdirectory(test) endif() # install the include folder install( DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} ) vision_opencv-1.12.3/cv_bridge/doc/000077500000000000000000000000001302104371500171445ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/doc/conf.py000066400000000000000000000147101302104371500204460ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # cv_bridge documentation build configuration file, created by # sphinx-quickstart on Mon Jun 1 14:21:53 2009. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.pngmath'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'cv_bridge' copyright = u'2009, Willow Garage, Inc.' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.1' # The full version, including alpha/beta/rc tags. release = '0.1.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. 
#today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'cv_bridgedoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). 
#latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'cv_bridge.tex', u'stereo\\_utils Documentation', u'James Bowman', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = { 'http://docs.python.org/': None, 'http://docs.scipy.org/doc/numpy' : None, } vision_opencv-1.12.3/cv_bridge/doc/index.rst000066400000000000000000000004751302104371500210130ustar00rootroot00000000000000cv_bridge ========= ``cv_bridge`` contains a single class :class:`CvBridge` that converts ROS Image messages to OpenCV images. .. module:: cv_bridge .. autoclass:: cv_bridge.CvBridge :members: .. autoclass:: cv_bridge.CvBridgeError Indices and tables ================== * :ref:`genindex` * :ref:`search` vision_opencv-1.12.3/cv_bridge/doc/mainpage.dox000066400000000000000000000003531302104371500214420ustar00rootroot00000000000000/** \mainpage \htmlinclude manifest.html \b cv_bridge contains classes for easily converting between ROS sensor_msgs/Image messages and OpenCV images. \section codeapi Code API - cv_bridge::CvImage - toCvCopy() - toCvShare() */ vision_opencv-1.12.3/cv_bridge/include/000077500000000000000000000000001302104371500200225ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/include/cv_bridge/000077500000000000000000000000001302104371500217465ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/include/cv_bridge/cv_bridge.h000066400000000000000000000364371302104371500240600ustar00rootroot00000000000000/********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2011, Willow Garage, Inc, * Copyright (c) 2015, Tal Regev. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Willow Garage nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE
 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
 * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 *********************************************************************/

#ifndef CV_BRIDGE_CV_BRIDGE_H
#define CV_BRIDGE_CV_BRIDGE_H

#include <sensor_msgs/Image.h>
#include <sensor_msgs/CompressedImage.h>
#include <ros/static_assert.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgproc/types_c.h>
#include <boost/shared_ptr.hpp>
#include <stdexcept>

namespace cv_bridge {

class Exception : public std::runtime_error
{
public:
  Exception(const std::string& description) : std::runtime_error(description) {}
};

class CvImage;

typedef boost::shared_ptr<CvImage> CvImagePtr;
typedef boost::shared_ptr<CvImage const> CvImageConstPtr;

//from: http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#Mat imread(const string& filename, int flags)
typedef enum {
  BMP, DIB,
  JPG, JPEG, JPE,
  JP2,
  PNG,
  PBM, PGM, PPM,
  SR, RAS,
  TIFF, TIF,
} Format;

/**
 * \brief Image message class that is interoperable with sensor_msgs/Image but uses a
 * more convenient cv::Mat representation for the image data.
 */
class CvImage
{
public:
  std_msgs::Header header; //!< ROS header
  std::string encoding;    //!< Image encoding ("mono8", "bgr8", etc.)
  cv::Mat image;           //!< Image data for use with OpenCV

  /**
   * \brief Empty constructor.
   */
  CvImage() {}

  /**
   * \brief Constructor.
   */
  CvImage(const std_msgs::Header& header, const std::string& encoding,
          const cv::Mat& image = cv::Mat())
    : header(header), encoding(encoding), image(image)
  {
  }

  /**
   * \brief Convert this message to a ROS sensor_msgs::Image message.
   *
   * The returned sensor_msgs::Image message contains a copy of the image data.
   */
  sensor_msgs::ImagePtr toImageMsg() const;

  /**
   * dst_format specifies the format to compress the image to.
   * Default value is JPG, which compresses to jpg format.
   * Can be: jpg, jp2, bmp, png, tif at the moment;
   * these are the formats supported by OpenCV:
   * http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#Mat imread(const string& filename, int flags)
   */
  sensor_msgs::CompressedImagePtr toCompressedImageMsg(const Format dst_format = JPG) const;

  /**
   * \brief Copy the message data to a ROS sensor_msgs::Image message.
   *
   * This overload is intended mainly for aggregate messages such as stereo_msgs::DisparityImage,
   * which contains a sensor_msgs::Image as a data member.
   */
  void toImageMsg(sensor_msgs::Image& ros_image) const;

  /**
   * dst_format specifies the format to compress the image to.
   * Default value is JPG, which compresses to jpg format.
   * Can be: jpg, jp2, bmp, png, tif at the moment;
   * these are the formats supported by OpenCV:
   * http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#Mat imread(const string& filename, int flags)
   */
  void toCompressedImageMsg(sensor_msgs::CompressedImage& ros_image,
                            const Format dst_format = JPG) const;

  typedef boost::shared_ptr<CvImage> Ptr;
  typedef boost::shared_ptr<CvImage const> ConstPtr;

protected:
  boost::shared_ptr<void const> tracked_object_; // for sharing ownership

  /// @cond DOXYGEN_IGNORE
  friend
  CvImageConstPtr toCvShare(const sensor_msgs::Image& source,
                            const boost::shared_ptr<void const>& tracked_object,
                            const std::string& encoding);
  /// @endcond
};

/**
 * \brief Convert a sensor_msgs::Image message to an OpenCV-compatible CvImage, copying the
 * image data.
 *
 * \param source   A shared_ptr to a sensor_msgs::Image message
 * \param encoding The desired encoding of the image data, one of the following strings:
 *    - \c "mono8"
 *    - \c "bgr8"
 *    - \c "bgra8"
 *    - \c "rgb8"
 *    - \c "rgba8"
 *    - \c "mono16"
 *
 * If \a encoding is the empty string (the default), the returned CvImage has the same encoding
 * as \a source.
 */
CvImagePtr toCvCopy(const sensor_msgs::ImageConstPtr& source,
                    const std::string& encoding = std::string());
CvImagePtr toCvCopy(const sensor_msgs::CompressedImageConstPtr& source,
                    const std::string& encoding = std::string());

/**
 * \brief Convert a sensor_msgs::Image message to an OpenCV-compatible CvImage, copying the
 * image data.
 *
 * \param source   A sensor_msgs::Image message
 * \param encoding The desired encoding of the image data, one of the following strings:
 *    - \c "mono8"
 *    - \c "bgr8"
 *    - \c "bgra8"
 *    - \c "rgb8"
 *    - \c "rgba8"
 *    - \c "mono16"
 *
 * If \a encoding is the empty string (the default), the returned CvImage has the same encoding
 * as \a source.
 * If the source is 8bit and the encoding 16 or vice-versa, a scaling is applied (65535/255 and
 * 255/65535 respectively). Otherwise, no scaling is applied and the rules from the convertTo OpenCV
 * function are applied (capping): http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-convertto
 */
CvImagePtr toCvCopy(const sensor_msgs::Image& source,
                    const std::string& encoding = std::string());
CvImagePtr toCvCopy(const sensor_msgs::CompressedImage& source,
                    const std::string& encoding = std::string());

/**
 * \brief Convert an immutable sensor_msgs::Image message to an OpenCV-compatible CvImage, sharing
 * the image data if possible.
 *
 * If the source encoding and desired encoding are the same, the returned CvImage will share
 * the image data with \a source without copying it. The returned CvImage cannot be modified, as that
 * could modify the \a source data.
 *
 * \param source   A shared_ptr to a sensor_msgs::Image message
 * \param encoding The desired encoding of the image data, one of the following strings:
 *    - \c "mono8"
 *    - \c "bgr8"
 *    - \c "bgra8"
 *    - \c "rgb8"
 *    - \c "rgba8"
 *    - \c "mono16"
 *
 * If \a encoding is the empty string (the default), the returned CvImage has the same encoding
 * as \a source.
 */
CvImageConstPtr toCvShare(const sensor_msgs::ImageConstPtr& source,
                          const std::string& encoding = std::string());

/**
 * \brief Convert an immutable sensor_msgs::Image message to an OpenCV-compatible CvImage, sharing
 * the image data if possible.
 *
 * If the source encoding and desired encoding are the same, the returned CvImage will share
 * the image data with \a source without copying it. The returned CvImage cannot be modified, as that
 * could modify the \a source data.
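 *
 * A minimal usage sketch (illustrative only, not part of the original
 * documentation; the callback name is hypothetical):
 * \code
 * void imageCallback(const sensor_msgs::ImageConstPtr& msg)
 * {
 *   cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg, "bgr8");
 *   // cv_ptr->image is a cv::Mat; no copy is made when the encoding already matches
 * }
 * \endcode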
 *
 * This overload is useful when you have a shared_ptr to a message that contains a
 * sensor_msgs::Image, and wish to share ownership with the containing message.
 *
 * \param source         The sensor_msgs::Image message
 * \param tracked_object A shared_ptr to an object owning the sensor_msgs::Image
 * \param encoding       The desired encoding of the image data, one of the following strings:
 *    - \c "mono8"
 *    - \c "bgr8"
 *    - \c "bgra8"
 *    - \c "rgb8"
 *    - \c "rgba8"
 *    - \c "mono16"
 *
 * If \a encoding is the empty string (the default), the returned CvImage has the same encoding
 * as \a source.
 */
CvImageConstPtr toCvShare(const sensor_msgs::Image& source,
                          const boost::shared_ptr<void const>& tracked_object,
                          const std::string& encoding = std::string());

/**
 * \brief Convert a CvImage to another encoding using the same rules as toCvCopy
 */
CvImagePtr cvtColor(const CvImageConstPtr& source,
                    const std::string& encoding);

struct CvtColorForDisplayOptions {
  CvtColorForDisplayOptions() :
    do_dynamic_scaling(false),
    min_image_value(0.0),
    max_image_value(0.0),
    colormap(-1),
    bg_label(-1) {}
  bool do_dynamic_scaling;
  double min_image_value;
  double max_image_value;
  int colormap;
  int bg_label;
};

/**
 * \brief Converts an immutable sensor_msgs::Image message to another CvImage for display purposes,
 * using practical conversion rules if needed.
 *
 * Data will be shared between input and output if possible.
 *
 * Recall: sensor_msgs::image_encodings::isColor and isMono tell whether an image contains R,G,B,A, mono
 * (or any combination/subset) with 8 or 16 bit depth.
 *
 * The following rules apply:
 * - if the output encoding is empty, the fact that the input image is mono or multiple-channel is
 *   preserved in the output image. The bit depth will be 8. It tries to convert to BGR no matter what
 *   encoding is passed.
 * - if the output encoding is not empty, sensor_msgs::image_encodings::isColor or isMono must
 *   return true for it. It must also be 8 bit in depth.
 * - if the input encoding is an OpenCV format (e.g. 8UC1), and if we have 1, 3 or 4 channels, it is
 *   respectively converted to mono, BGR or BGRA.
 * - if the input encoding is 32SC1, the image is treated as a label image and converted to a
 *   BGR image with a different color for each label.
 *
 * \param source A shared_ptr to a sensor_msgs::Image message
 * \param encoding Either an encoding string that returns true in sensor_msgs::image_encodings::isColor
 * or isMono, or the empty string as explained above.
 * \param options (cv_bridge::CvtColorForDisplayOptions) Options to convert the source image with.
 * - do_dynamic_scaling If true, the image is dynamically scaled between its minimum and maximum value
 *   before being converted to its final encoding.
 * - min_image_value Independently from do_dynamic_scaling, if min_image_value and max_image_value are
 *   different, the image is scaled between these two values before being converted to its final encoding.
 * - max_image_value Maximum image value
 * - colormap Colormap with which the source image is converted.
 */
CvImageConstPtr cvtColorForDisplay(const CvImageConstPtr& source,
                                   const std::string& encoding = std::string(),
                                   const CvtColorForDisplayOptions options = CvtColorForDisplayOptions());

/**
 * \brief Get the OpenCV type enum corresponding to the encoding.
 *
 * For example, "bgr8" -> CV_8UC3, "32FC1" -> CV_32FC1, and "32FC10" -> CV_32FC10.
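 *
 * A short sketch of typical use (illustrative only):
 * \code
 * int type = cv_bridge::getCvType("bgr8");  // yields CV_8UC3
 * cv::Mat mat(480, 640, type);
 * \endcode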
 */
int getCvType(const std::string& encoding);

} // namespace cv_bridge


// CvImage as a first class message type

// The rest of this file hooks into the roscpp serialization API to make CvImage
// a first-class message type you can publish and subscribe to directly.
// Unfortunately this doesn't yet work with image_transport, so don't rewrite all
// your callbacks to use CvImage! It might be useful for specific tasks, like
// processing bag files.

/// @cond DOXYGEN_IGNORE
namespace ros {

namespace message_traits {

template<> struct MD5Sum<cv_bridge::CvImage>
{
  static const char* value() { return MD5Sum<sensor_msgs::Image>::value(); }
  static const char* value(const cv_bridge::CvImage&) { return value(); }

  static const uint64_t static_value1 = MD5Sum<sensor_msgs::Image>::static_value1;
  static const uint64_t static_value2 = MD5Sum<sensor_msgs::Image>::static_value2;

  // If the definition of sensor_msgs/Image changes, we'll get a compile error here.
  ROS_STATIC_ASSERT(MD5Sum<sensor_msgs::Image>::static_value1 == 0x060021388200f6f0ULL);
  ROS_STATIC_ASSERT(MD5Sum<sensor_msgs::Image>::static_value2 == 0xf447d0fcd9c64743ULL);
};

template<> struct DataType<cv_bridge::CvImage>
{
  static const char* value() { return DataType<sensor_msgs::Image>::value(); }
  static const char* value(const cv_bridge::CvImage&) { return value(); }
};

template<> struct Definition<cv_bridge::CvImage>
{
  static const char* value() { return Definition<sensor_msgs::Image>::value(); }
  static const char* value(const cv_bridge::CvImage&) { return value(); }
};

template<> struct HasHeader<cv_bridge::CvImage> : TrueType {};

} // namespace ros::message_traits

namespace serialization {

template<> struct Serializer<cv_bridge::CvImage>
{
  /// @todo Still ignoring endianness...

  template<typename Stream>
  inline static void write(Stream& stream, const cv_bridge::CvImage& m)
  {
    stream.next(m.header);
    stream.next((uint32_t)m.image.rows); // height
    stream.next((uint32_t)m.image.cols); // width
    stream.next(m.encoding);
    uint8_t is_bigendian = 0;
    stream.next(is_bigendian);
    stream.next((uint32_t)m.image.step);
    size_t data_size = m.image.step*m.image.rows;
    stream.next((uint32_t)data_size);
    if (data_size > 0)
      memcpy(stream.advance(data_size), m.image.data, data_size);
  }

  template<typename Stream>
  inline static void read(Stream& stream, cv_bridge::CvImage& m)
  {
    stream.next(m.header);
    uint32_t height, width;
    stream.next(height);
    stream.next(width);
    stream.next(m.encoding);
    uint8_t is_bigendian;
    stream.next(is_bigendian);
    uint32_t step, data_size;
    stream.next(step);
    stream.next(data_size);

    int type = cv_bridge::getCvType(m.encoding);

    // Construct matrix pointing to the stream data, then copy it to m.image
    cv::Mat tmp((int)height, (int)width, type, stream.advance(data_size), (size_t)step);
    tmp.copyTo(m.image);
  }

  inline static uint32_t serializedLength(const cv_bridge::CvImage& m)
  {
    size_t data_size = m.image.step*m.image.rows;
    return serializationLength(m.header) + serializationLength(m.encoding) + 17 + data_size;
  }
};

} // namespace ros::serialization

namespace message_operations {

template<> struct Printer<cv_bridge::CvImage>
{
  template<typename Stream>
  static void stream(Stream& s, const std::string& indent, const cv_bridge::CvImage& m)
  {
    /// @todo Replicate printing for sensor_msgs::Image
  }
};

} // namespace ros::message_operations

} // namespace ros

namespace cv_bridge {

inline std::ostream& operator<<(std::ostream& s, const CvImage& m)
{
  ros::message_operations::Printer<CvImage>::stream(s, "", m);
  return s;
}

} // namespace cv_bridge

/// @endcond

#endif
vision_opencv-1.12.3/cv_bridge/include/cv_bridge/rgb_colors.h
// -*- mode: c++ -*-
/*********************************************************************
 * Original color definition is at scikit-image distributed with
 * following license disclaimer:
 *
 * Copyright (C) 2011, the scikit-image team
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *  1. Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *  2. Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in
 *     the documentation and/or other materials provided with the
 *     distribution.
 *  3. Neither the name of skimage nor the names of its contributors may be
 *     used to endorse or promote products derived from this software without
 *     specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
 * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
 * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 *********************************************************************/

#ifndef CV_BRIDGE_RGB_COLORS_H_
#define CV_BRIDGE_RGB_COLORS_H_

#include <opencv2/opencv.hpp>

namespace cv_bridge {

namespace rgb_colors {

/**
 * @brief
 * 146 rgb colors
 */
enum Colors {
  ALICEBLUE, ANTIQUEWHITE, AQUA, AQUAMARINE, AZURE, BEIGE,
  BISQUE, BLACK, BLANCHEDALMOND, BLUE, BLUEVIOLET, BROWN,
  BURLYWOOD, CADETBLUE, CHARTREUSE, CHOCOLATE, CORAL, CORNFLOWERBLUE,
  CORNSILK, CRIMSON, CYAN, DARKBLUE, DARKCYAN, DARKGOLDENROD,
  DARKGRAY, DARKGREEN, DARKGREY, DARKKHAKI, DARKMAGENTA, DARKOLIVEGREEN,
  DARKORANGE, DARKORCHID, DARKRED, DARKSALMON, DARKSEAGREEN, DARKSLATEBLUE,
  DARKSLATEGRAY, DARKSLATEGREY, DARKTURQUOISE, DARKVIOLET, DEEPPINK, DEEPSKYBLUE,
  DIMGRAY, DIMGREY, DODGERBLUE, FIREBRICK, FLORALWHITE, FORESTGREEN,
  FUCHSIA, GAINSBORO, GHOSTWHITE, GOLD, GOLDENROD, GRAY,
  GREEN, GREENYELLOW, GREY, HONEYDEW, HOTPINK, INDIANRED,
  INDIGO, IVORY, KHAKI, LAVENDER, LAVENDERBLUSH, LAWNGREEN,
  LEMONCHIFFON, LIGHTBLUE, LIGHTCORAL, LIGHTCYAN, LIGHTGOLDENRODYELLOW, LIGHTGRAY,
  LIGHTGREEN, LIGHTGREY, LIGHTPINK, LIGHTSALMON, LIGHTSEAGREEN, LIGHTSKYBLUE,
  LIGHTSLATEGRAY, LIGHTSLATEGREY, LIGHTSTEELBLUE, LIGHTYELLOW, LIME, LIMEGREEN,
  LINEN, MAGENTA, MAROON, MEDIUMAQUAMARINE, MEDIUMBLUE, MEDIUMORCHID,
  MEDIUMPURPLE, MEDIUMSEAGREEN, MEDIUMSLATEBLUE, MEDIUMSPRINGGREEN, MEDIUMTURQUOISE, MEDIUMVIOLETRED,
  MIDNIGHTBLUE, MINTCREAM, MISTYROSE, MOCCASIN, NAVAJOWHITE, NAVY,
  OLDLACE, OLIVE, OLIVEDRAB, ORANGE, ORANGERED, ORCHID,
  PALEGOLDENROD, PALEGREEN, PALEVIOLETRED, PAPAYAWHIP, PEACHPUFF, PERU,
  PINK, PLUM, POWDERBLUE, PURPLE, RED, ROSYBROWN,
  ROYALBLUE, SADDLEBROWN, SALMON, SANDYBROWN, SEAGREEN, SEASHELL,
  SIENNA, SILVER, SKYBLUE, SLATEBLUE, SLATEGRAY, SLATEGREY,
  SNOW, SPRINGGREEN, STEELBLUE, TAN, TEAL, THISTLE,
  TOMATO, TURQUOISE, VIOLET, WHEAT, WHITE, WHITESMOKE,
  YELLOW, YELLOWGREEN,
};

/**
 * @brief
 * get rgb color with enum.
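 *
 * Example (illustrative only):
 * \code
 * cv::Vec3d color = cv_bridge::rgb_colors::getRGBColor(cv_bridge::rgb_colors::RED);
 * \endcode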
 */
cv::Vec3d getRGBColor(const int color);

} // namespace rgb_colors

} // namespace cv_bridge

#endif
vision_opencv-1.12.3/cv_bridge/package.xml
<package>
  <name>cv_bridge</name>
  <version>1.12.3</version>
  <description>
    This contains CvBridge, which converts between ROS
    Image messages and OpenCV images.
  </description>
  <author>Patrick Mihelich</author>
  <author>James Bowman</author>
  <maintainer>Vincent Rabaud</maintainer>
  <license>BSD</license>
  <url>http://www.ros.org/wiki/cv_bridge</url>
  <url>https://github.com/ros-perception/vision_opencv</url>
  <url>https://github.com/ros-perception/vision_opencv/issues</url>

  <buildtool_depend>catkin</buildtool_depend>

  <build_depend>boost</build_depend>
  <build_depend>opencv3</build_depend>
  <build_depend>python</build_depend>
  <build_depend>rosconsole</build_depend>
  <build_depend>sensor_msgs</build_depend>

  <run_depend>boost</run_depend>
  <run_depend>opencv3</run_depend>
  <run_depend>python</run_depend>
  <run_depend>rosconsole</run_depend>
  <run_depend>sensor_msgs</run_depend>

  <test_depend>rostest</test_depend>
  <test_depend>python-numpy</test_depend>

  <doc_depend>dvipng</doc_depend>
</package>
vision_opencv-1.12.3/cv_bridge/python/
vision_opencv-1.12.3/cv_bridge/python/CMakeLists.txt
configure_file(__init__.py.plain.in
  ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_PYTHON_DESTINATION}/boost/__init__.py
  @ONLY
)
install(FILES ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_PYTHON_DESTINATION}/boost/__init__.py
  DESTINATION ${CATKIN_PACKAGE_PYTHON_DESTINATION}/boost/
)
vision_opencv-1.12.3/cv_bridge/python/__init__.py.plain.in
vision_opencv-1.12.3/cv_bridge/python/cv_bridge/
vision_opencv-1.12.3/cv_bridge/python/cv_bridge/__init__.py
from .core import CvBridge, CvBridgeError

# python bindings
try:
    # This try is just to satisfy doc jobs that are built differently.
    from cv_bridge.boost.cv_bridge_boost import cvtColorForDisplay, getCvType
except ImportError:
    pass
vision_opencv-1.12.3/cv_bridge/python/cv_bridge/core.py
# Software License Agreement (BSD License)
#
# Copyright (c) 2011, Willow Garage, Inc.
# Copyright (c) 2016, Tal Regev.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of Willow Garage, Inc. nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
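
# Typical round-trip use of this module (an illustrative sketch, not part of
# the original file):
#
#   from cv_bridge import CvBridge
#   bridge = CvBridge()
#   cv_image = bridge.imgmsg_to_cv2(image_msg, desired_encoding='bgr8')
#   image_msg2 = bridge.cv2_to_imgmsg(cv_image, encoding='bgr8')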
import sensor_msgs.msg
import sys


class CvBridgeError(TypeError):
    """
    This is the error raised by :class:`cv_bridge.CvBridge` methods when they fail.
    """
    pass


class CvBridge(object):
    """
    The CvBridge is an object that converts between OpenCV Images and ROS Image messages.

       .. doctest::
           :options: -ELLIPSIS, +NORMALIZE_WHITESPACE

           >>> import cv2
           >>> import numpy as np
           >>> from cv_bridge import CvBridge
           >>> br = CvBridge()
           >>> dtype, n_channels = br.encoding_to_dtype_with_channels('8UC3')
           >>> im = np.ndarray(shape=(480, 640, n_channels), dtype=dtype)
           >>> msg = br.cv2_to_imgmsg(im)  # Convert the image to a message
           >>> im2 = br.imgmsg_to_cv2(msg)  # Convert the message to a new image
           >>> cmprsmsg = br.cv2_to_compressed_imgmsg(im)  # Convert the image to a compressed message
           >>> im22 = br.compressed_imgmsg_to_cv2(cmprsmsg)  # Convert the compressed message to a new image
           >>> cv2.imwrite("this_was_a_message_briefly.png", im2)

    """

    def __init__(self):
        import cv2
        self.cvtype_to_name = {}
        self.cvdepth_to_numpy_depth = {cv2.CV_8U: 'uint8', cv2.CV_8S: 'int8', cv2.CV_16U: 'uint16',
                                       cv2.CV_16S: 'int16', cv2.CV_32S: 'int32', cv2.CV_32F: 'float32',
                                       cv2.CV_64F: 'float64'}

        for t in ["8U", "8S", "16U", "16S", "32S", "32F", "64F"]:
            for c in [1, 2, 3, 4]:
                nm = "%sC%d" % (t, c)
                self.cvtype_to_name[getattr(cv2, "CV_%s" % nm)] = nm

        self.numpy_type_to_cvtype = {'uint8': '8U', 'int8': '8S', 'uint16': '16U',
                                     'int16': '16S', 'int32': '32S', 'float32': '32F',
                                     'float64': '64F'}
        self.numpy_type_to_cvtype.update(dict((v, k) for (k, v) in self.numpy_type_to_cvtype.items()))

    def dtype_with_channels_to_cvtype2(self, dtype, n_channels):
        return '%sC%d' % (self.numpy_type_to_cvtype[dtype.name], n_channels)

    def cvtype2_to_dtype_with_channels(self, cvtype):
        from cv_bridge.boost.cv_bridge_boost import CV_MAT_CNWrap, CV_MAT_DEPTHWrap
        return self.cvdepth_to_numpy_depth[CV_MAT_DEPTHWrap(cvtype)], CV_MAT_CNWrap(cvtype)

    def encoding_to_cvtype2(self, encoding):
        from cv_bridge.boost.cv_bridge_boost import getCvType

        try:
            return getCvType(encoding)
        except RuntimeError as e:
            raise CvBridgeError(e)

    def encoding_to_dtype_with_channels(self, encoding):
        return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))

    def compressed_imgmsg_to_cv2(self, cmprs_img_msg, desired_encoding = "passthrough"):
        """
        Convert a sensor_msgs::CompressedImage message to an OpenCV :cpp:type:`cv::Mat`.

        :param cmprs_img_msg:   A :cpp:type:`sensor_msgs::CompressedImage` message
        :param desired_encoding:  The encoding of the image data, one of the following strings:

           * ``"passthrough"``
           * one of the standard strings in sensor_msgs/image_encodings.h

        :rtype: :cpp:type:`cv::Mat`
        :raises CvBridgeError: when conversion is not possible.

        If desired_encoding is ``"passthrough"``, then the returned image has the same format
        as cmprs_img_msg. Otherwise desired_encoding must be one of the standard image encodings.

        This function returns an OpenCV :cpp:type:`cv::Mat` on success, or raises :exc:`cv_bridge.CvBridgeError` on failure.
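
        For example (an illustrative sketch; ``cmprs_msg`` is assumed to be a
        populated CompressedImage message)::

            br = CvBridge()
            im = br.compressed_imgmsg_to_cv2(cmprs_msg, desired_encoding='bgr8')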
        If the image only has one channel, the shape has size 2 (width and height)
        """
        import cv2
        import numpy as np

        str_msg = cmprs_img_msg.data
        buf = np.ndarray(shape=(1, len(str_msg)),
                         dtype=np.uint8, buffer=cmprs_img_msg.data)
        im = cv2.imdecode(buf, cv2.IMREAD_ANYCOLOR)

        if desired_encoding == "passthrough":
            return im

        from cv_bridge.boost.cv_bridge_boost import cvtColor2

        try:
            res = cvtColor2(im, "bgr8", desired_encoding)
        except RuntimeError as e:
            raise CvBridgeError(e)

        return res

    def imgmsg_to_cv2(self, img_msg, desired_encoding = "passthrough"):
        """
        Convert a sensor_msgs::Image message to an OpenCV :cpp:type:`cv::Mat`.

        :param img_msg:   A :cpp:type:`sensor_msgs::Image` message
        :param desired_encoding:  The encoding of the image data, one of the following strings:

           * ``"passthrough"``
           * one of the standard strings in sensor_msgs/image_encodings.h

        :rtype: :cpp:type:`cv::Mat`
        :raises CvBridgeError: when conversion is not possible.

        If desired_encoding is ``"passthrough"``, then the returned image has the same format
        as img_msg. Otherwise desired_encoding must be one of the standard image encodings.

        This function returns an OpenCV :cpp:type:`cv::Mat` on success, or raises :exc:`cv_bridge.CvBridgeError` on failure.

        If the image only has one channel, the shape has size 2 (width and height)
        """
        import cv2
        import numpy as np
        dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
        dtype = np.dtype(dtype)
        dtype = dtype.newbyteorder('>' if img_msg.is_bigendian else '<')
        if n_channels == 1:
            im = np.ndarray(shape=(img_msg.height, img_msg.width),
                            dtype=dtype, buffer=img_msg.data)
        else:
            im = np.ndarray(shape=(img_msg.height, img_msg.width, n_channels),
                            dtype=dtype, buffer=img_msg.data)
        # If the byte order is different between the message and the system.
        if img_msg.is_bigendian == (sys.byteorder == 'little'):
            im = im.byteswap().newbyteorder()

        if desired_encoding == "passthrough":
            return im

        from cv_bridge.boost.cv_bridge_boost import cvtColor2

        try:
            res = cvtColor2(im, img_msg.encoding, desired_encoding)
        except RuntimeError as e:
            raise CvBridgeError(e)

        return res

    def cv2_to_compressed_imgmsg(self, cvim, dst_format = "jpg"):
        """
        Convert an OpenCV :cpp:type:`cv::Mat` type to a ROS sensor_msgs::CompressedImage message.

        :param cvim:  An OpenCV :cpp:type:`cv::Mat`
        :param dst_format:  The format of the image data, one of the following strings:

           * from http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html
           * from http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#Mat imread(const string& filename, int flags)
           * bmp, dib
           * jpeg, jpg, jpe
           * jp2
           * png
           * pbm, pgm, ppm
           * sr, ras
           * tiff, tif

        :rtype: A sensor_msgs.msg.CompressedImage message
        :raises CvBridgeError: when the ``cvim`` has a type that is incompatible with ``format``

        This function returns a sensor_msgs::CompressedImage message on success, or raises :exc:`cv_bridge.CvBridgeError` on failure.
        """
        import cv2
        import numpy as np
        if not isinstance(cvim, (np.ndarray, np.generic)):
            raise TypeError('Your input type is not a numpy array')
        cmprs_img_msg = sensor_msgs.msg.CompressedImage()
        cmprs_img_msg.format = dst_format
        ext_format = '.' + dst_format
        try:
            cmprs_img_msg.data = np.array(cv2.imencode(ext_format, cvim)[1]).tostring()
        except RuntimeError as e:
            raise CvBridgeError(e)

        return cmprs_img_msg

    def cv2_to_imgmsg(self, cvim, encoding = "passthrough"):
        """
        Convert an OpenCV :cpp:type:`cv::Mat` type to a ROS sensor_msgs::Image message.
:param cvim: An OpenCV :cpp:type:`cv::Mat` :param encoding: The encoding of the image data, one of the following strings: * ``"passthrough"`` * one of the standard strings in sensor_msgs/image_encodings.h :rtype: A sensor_msgs.msg.Image message :raises CvBridgeError: when the ``cvim`` has a type that is incompatible with ``encoding`` If encoding is ``"passthrough"``, then the message has the same encoding as the image's OpenCV type. Otherwise desired_encoding must be one of the standard image encodings This function returns a sensor_msgs::Image message on success, or raises :exc:`cv_bridge.CvBridgeError` on failure. """ import cv2 import numpy as np if not isinstance(cvim, (np.ndarray, np.generic)): raise TypeError('Your input type is not a numpy array') img_msg = sensor_msgs.msg.Image() img_msg.height = cvim.shape[0] img_msg.width = cvim.shape[1] if len(cvim.shape) < 3: cv_type = self.dtype_with_channels_to_cvtype2(cvim.dtype, 1) else: cv_type = self.dtype_with_channels_to_cvtype2(cvim.dtype, cvim.shape[2]) if encoding == "passthrough": img_msg.encoding = cv_type else: img_msg.encoding = encoding # Verify that the supplied encoding is compatible with the type of the OpenCV image if self.cvtype_to_name[self.encoding_to_cvtype2(encoding)] != cv_type: raise CvBridgeError("encoding specified as %s, but image has incompatible type %s" % (encoding, cv_type)) if cvim.dtype.byteorder == '>': img_msg.is_bigendian = True img_msg.data = cvim.tostring() img_msg.step = len(img_msg.data) / img_msg.height return img_msg vision_opencv-1.12.3/cv_bridge/rosdoc.yaml000066400000000000000000000002771302104371500205620ustar00rootroot00000000000000 - builder: doxygen name: C++ API output_dir: c++ file_patterns: '*.c *.cpp *.h *.cc *.hh *.dox' - builder: sphinx name: Python API output_dir: python sphinx_root_dir: doc vision_opencv-1.12.3/cv_bridge/setup.py000066400000000000000000000003431302104371500201110ustar00rootroot00000000000000#!/usr/bin/env python from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup d = generate_distutils_setup() d['packages'] = ['cv_bridge'] d['package_dir'] = {'' : 'python/'} setup(**d) vision_opencv-1.12.3/cv_bridge/src/000077500000000000000000000000001302104371500171665ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/src/CMakeLists.txt000066400000000000000000000044641302104371500217360ustar00rootroot00000000000000# add library include_directories(./) add_library(${PROJECT_NAME} cv_bridge.cpp rgb_colors.cpp) add_dependencies(${PROJECT_NAME} ${catkin_EXPORTED_TARGETS}) target_link_libraries(${PROJECT_NAME} ${OpenCV_LIBRARIES} ${catkin_LIBRARIES}) install(TARGETS ${PROJECT_NAME} DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}) if(NOT ANDROID) # add a Boost Python library find_package(PythonInterp REQUIRED) find_package(PythonLibs "${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}") #Get the numpy include directory from its python module if(NOT PYTHON_NUMPY_INCLUDE_DIR) execute_process(COMMAND ${PYTHON_EXECUTABLE} -c "import numpy; print(numpy.get_include())" RESULT_VARIABLE PYTHON_NUMPY_PROCESS OUTPUT_VARIABLE PYTHON_NUMPY_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE) if(PYTHON_NUMPY_PROCESS EQUAL 0) file(TO_CMAKE_PATH "${PYTHON_NUMPY_INCLUDE_DIR}" PYTHON_NUMPY_INCLUDE_CMAKE_PATH) set(PYTHON_NUMPY_INCLUDE_DIR ${PYTHON_NUMPY_INCLUDE_CMAKE_PATH} CACHE PATH "Numpy include directory") else(PYTHON_NUMPY_PROCESS EQUAL 0) message(SEND_ERROR "Could not determine the NumPy include directory, verify that NumPy was installed correctly.") 
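      # (Illustrative note, not part of the original file.) Because the result is
      # cached as a PATH above, the NumPy detection can also be bypassed manually
      # from the command line, e.g.:
      #   cmake -DPYTHON_NUMPY_INCLUDE_DIR=/path/to/numpy/core/include ..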
endif(PYTHON_NUMPY_PROCESS EQUAL 0) endif(NOT PYTHON_NUMPY_INCLUDE_DIR) include_directories(${PYTHON_INCLUDE_PATH} ${Boost_INCLUDE_DIRS} ${PYTHON_NUMPY_INCLUDE_DIR}) if (PYTHON_VERSION_MAJOR VERSION_EQUAL 3) add_definitions(-DPYTHON3) endif() if (OpenCV_VERSION_MAJOR VERSION_EQUAL 3) add_library(${PROJECT_NAME}_boost module.cpp module_opencv3.cpp) else() add_library(${PROJECT_NAME}_boost module.cpp module_opencv2.cpp) endif() target_link_libraries(${PROJECT_NAME}_boost ${Boost_LIBRARIES} ${catkin_LIBRARIES} ${PYTHON_LIBRARIES} ${PROJECT_NAME} ) set_target_properties(${PROJECT_NAME}_boost PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CATKIN_DEVEL_PREFIX}/${CATKIN_GLOBAL_PYTHON_DESTINATION}/${PROJECT_NAME}/boost/ PREFIX "" ) if(APPLE) set_target_properties(${PROJECT_NAME}_boost PROPERTIES SUFFIX ".so") endif() install(TARGETS ${PROJECT_NAME}_boost DESTINATION ${CATKIN_PACKAGE_PYTHON_DESTINATION}/boost/) endif() vision_opencv-1.12.3/cv_bridge/src/cv_bridge.cpp000066400000000000000000000617331302104371500216300ustar00rootroot00000000000000/********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2011, Willow Garage, Inc. * Copyright (c) 2015, Tal Regev. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Willow Garage nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. 
*********************************************************************/

#include "boost/endian/conversion.hpp"

#include <map>

#include <boost/make_shared.hpp>
#include <boost/regex.hpp>

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgcodecs/imgcodecs.hpp>

#include <sensor_msgs/image_encodings.h>

#include <cv_bridge/cv_bridge.h>
#include <cv_bridge/rgb_colors.h>

namespace enc = sensor_msgs::image_encodings;

namespace cv_bridge {

// Map an encoding depth token ("8U", "16S", ...) to the corresponding CV_* depth constant
// (anything else falls through to 6, i.e. CV_64F).
static int depthStrToInt(const std::string depth)
{
  if (depth == "8U") {
    return 0;
  } else if (depth == "8S") {
    return 1;
  } else if (depth == "16U") {
    return 2;
  } else if (depth == "16S") {
    return 3;
  } else if (depth == "32S") {
    return 4;
  } else if (depth == "32F") {
    return 5;
  }
  return 6;
}

int getCvType(const std::string& encoding)
{
  // Check for the most common encodings first
  if (encoding == enc::BGR8)   return CV_8UC3;
  if (encoding == enc::MONO8)  return CV_8UC1;
  if (encoding == enc::RGB8)   return CV_8UC3;
  if (encoding == enc::MONO16) return CV_16UC1;
  if (encoding == enc::BGR16)  return CV_16UC3;
  if (encoding == enc::RGB16)  return CV_16UC3;
  if (encoding == enc::BGRA8)  return CV_8UC4;
  if (encoding == enc::RGBA8)  return CV_8UC4;
  if (encoding == enc::BGRA16) return CV_16UC4;
  if (encoding == enc::RGBA16) return CV_16UC4;

  // For bayer, return one-channel
  if (encoding == enc::BAYER_RGGB8)  return CV_8UC1;
  if (encoding == enc::BAYER_BGGR8)  return CV_8UC1;
  if (encoding == enc::BAYER_GBRG8)  return CV_8UC1;
  if (encoding == enc::BAYER_GRBG8)  return CV_8UC1;
  if (encoding == enc::BAYER_RGGB16) return CV_16UC1;
  if (encoding == enc::BAYER_BGGR16) return CV_16UC1;
  if (encoding == enc::BAYER_GBRG16) return CV_16UC1;
  if (encoding == enc::BAYER_GRBG16) return CV_16UC1;

  // Miscellaneous
  if (encoding == enc::YUV422) return CV_8UC2;

  // Check all the generic content encodings
  boost::cmatch m;
  if (boost::regex_match(encoding.c_str(), m, boost::regex("(8U|8S|16U|16S|32S|32F|64F)C([0-9]+)"))) {
    return CV_MAKETYPE(depthStrToInt(m[1].str()), atoi(m[2].str().c_str()));
  }
  if (boost::regex_match(encoding.c_str(), m, boost::regex("(8U|8S|16U|16S|32S|32F|64F)"))) {
    return CV_MAKETYPE(depthStrToInt(m[1].str()), 1);
  }
  throw Exception("Unrecognized image encoding [" + encoding + "]");
}

/// @cond DOXYGEN_IGNORE

enum Encoding { INVALID = -1, GRAY = 0, RGB, BGR, RGBA, BGRA, YUV422,
                BAYER_RGGB, BAYER_BGGR, BAYER_GBRG, BAYER_GRBG };

Encoding getEncoding(const std::string& encoding)
{
  if ((encoding == enc::MONO8) || (encoding == enc::MONO16)) return GRAY;
  if ((encoding == enc::BGR8) || (encoding == enc::BGR16)) return BGR;
  if ((encoding == enc::RGB8) || (encoding == enc::RGB16)) return RGB;
  if ((encoding == enc::BGRA8) || (encoding == enc::BGRA16)) return BGRA;
  if ((encoding == enc::RGBA8) || (encoding == enc::RGBA16)) return RGBA;
  if (encoding == enc::YUV422) return YUV422;

  if ((encoding == enc::BAYER_RGGB8) || (encoding == enc::BAYER_RGGB16)) return BAYER_RGGB;
  if ((encoding == enc::BAYER_BGGR8) || (encoding == enc::BAYER_BGGR16)) return BAYER_BGGR;
  if ((encoding == enc::BAYER_GBRG8) || (encoding == enc::BAYER_GBRG16)) return BAYER_GBRG;
  if ((encoding == enc::BAYER_GRBG8) || (encoding == enc::BAYER_GRBG16)) return BAYER_GRBG;

  // We don't support conversions to/from other types
  return INVALID;
}

static const int SAME_FORMAT = -1;

/** Return a list of OpenCV conversion codes to get from one Format to the other
 * The key is a pair <source Encoding, destination Encoding> and the value a succession of OpenCV conversion codes
 * It's not efficient code but it is only called once and the structure is small enough
 */
std::map<std::pair<Encoding, Encoding>, std::vector<int> > getConversionCodes() {
  std::map<std::pair<Encoding, Encoding>, std::vector<int> > res;
  for(int i=0; i<=5; ++i)
    res[std::pair<Encoding, Encoding>(Encoding(i), Encoding(i))].push_back(SAME_FORMAT);
  res[std::make_pair(GRAY,
RGB)].push_back(cv::COLOR_GRAY2RGB); res[std::make_pair(GRAY, BGR)].push_back(cv::COLOR_GRAY2BGR); res[std::make_pair(GRAY, RGBA)].push_back(cv::COLOR_GRAY2RGBA); res[std::make_pair(GRAY, BGRA)].push_back(cv::COLOR_GRAY2BGRA); res[std::make_pair(RGB, GRAY)].push_back(cv::COLOR_RGB2GRAY); res[std::make_pair(RGB, BGR)].push_back(cv::COLOR_RGB2BGR); res[std::make_pair(RGB, RGBA)].push_back(cv::COLOR_RGB2RGBA); res[std::make_pair(RGB, BGRA)].push_back(cv::COLOR_RGB2BGRA); res[std::make_pair(BGR, GRAY)].push_back(cv::COLOR_BGR2GRAY); res[std::make_pair(BGR, RGB)].push_back(cv::COLOR_BGR2RGB); res[std::make_pair(BGR, RGBA)].push_back(cv::COLOR_BGR2RGBA); res[std::make_pair(BGR, BGRA)].push_back(cv::COLOR_BGR2BGRA); res[std::make_pair(RGBA, GRAY)].push_back(cv::COLOR_RGBA2GRAY); res[std::make_pair(RGBA, RGB)].push_back(cv::COLOR_RGBA2RGB); res[std::make_pair(RGBA, BGR)].push_back(cv::COLOR_RGBA2BGR); res[std::make_pair(RGBA, BGRA)].push_back(cv::COLOR_RGBA2BGRA); res[std::make_pair(BGRA, GRAY)].push_back(cv::COLOR_BGRA2GRAY); res[std::make_pair(BGRA, RGB)].push_back(cv::COLOR_BGRA2RGB); res[std::make_pair(BGRA, BGR)].push_back(cv::COLOR_BGRA2BGR); res[std::make_pair(BGRA, RGBA)].push_back(cv::COLOR_BGRA2RGBA); res[std::make_pair(YUV422, GRAY)].push_back(cv::COLOR_YUV2GRAY_UYVY); res[std::make_pair(YUV422, RGB)].push_back(cv::COLOR_YUV2RGB_UYVY); res[std::make_pair(YUV422, BGR)].push_back(cv::COLOR_YUV2BGR_UYVY); res[std::make_pair(YUV422, RGBA)].push_back(cv::COLOR_YUV2RGBA_UYVY); res[std::make_pair(YUV422, BGRA)].push_back(cv::COLOR_YUV2BGRA_UYVY); // Deal with Bayer res[std::make_pair(BAYER_RGGB, GRAY)].push_back(cv::COLOR_BayerBG2GRAY); res[std::make_pair(BAYER_RGGB, RGB)].push_back(cv::COLOR_BayerBG2RGB); res[std::make_pair(BAYER_RGGB, BGR)].push_back(cv::COLOR_BayerBG2BGR); res[std::make_pair(BAYER_BGGR, GRAY)].push_back(cv::COLOR_BayerRG2GRAY); res[std::make_pair(BAYER_BGGR, RGB)].push_back(cv::COLOR_BayerRG2RGB); res[std::make_pair(BAYER_BGGR, BGR)].push_back(cv::COLOR_BayerRG2BGR); res[std::make_pair(BAYER_GBRG, GRAY)].push_back(cv::COLOR_BayerGR2GRAY); res[std::make_pair(BAYER_GBRG, RGB)].push_back(cv::COLOR_BayerGR2RGB); res[std::make_pair(BAYER_GBRG, BGR)].push_back(cv::COLOR_BayerGR2BGR); res[std::make_pair(BAYER_GRBG, GRAY)].push_back(cv::COLOR_BayerGB2GRAY); res[std::make_pair(BAYER_GRBG, RGB)].push_back(cv::COLOR_BayerGB2RGB); res[std::make_pair(BAYER_GRBG, BGR)].push_back(cv::COLOR_BayerGB2BGR); return res; } const std::vector getConversionCode(std::string src_encoding, std::string dst_encoding) { Encoding src_encod = getEncoding(src_encoding); Encoding dst_encod = getEncoding(dst_encoding); bool is_src_color_format = enc::isColor(src_encoding) || enc::isMono(src_encoding) || enc::isBayer(src_encoding) || (src_encoding == enc::YUV422); bool is_dst_color_format = enc::isColor(dst_encoding) || enc::isMono(dst_encoding) || enc::isBayer(dst_encoding) || (dst_encoding == enc::YUV422); bool is_num_channels_the_same = (enc::numChannels(src_encoding) == enc::numChannels(dst_encoding)); // If we have no color info in the source, we can only convert to the same format which // was resolved in the previous condition. Otherwise, fail if (!is_src_color_format) { if (is_dst_color_format) throw Exception("[" + src_encoding + "] is not a color format. but [" + dst_encoding + "] is. 
The conversion does not make sense"); if (!is_num_channels_the_same) throw Exception("[" + src_encoding + "] and [" + dst_encoding + "] do not have the same number of channel"); return std::vector(1, SAME_FORMAT); } // If we are converting from a color type to a non color type, we can only do so if we stick // to the number of channels if (!is_dst_color_format) { if (!is_num_channels_the_same) throw Exception("[" + src_encoding + "] is a color format but [" + dst_encoding + "] " + "is not so they must have the same OpenCV type, CV_8UC3, CV16UC1 ...."); return std::vector(1, SAME_FORMAT); } // If we are converting from a color type to another type, then everything is fine static const std::map, std::vector > CONVERSION_CODES = getConversionCodes(); std::pair key(src_encod, dst_encod); std::map, std::vector >::const_iterator val = CONVERSION_CODES.find(key); if (val == CONVERSION_CODES.end()) throw Exception("Unsupported conversion from [" + src_encoding + "] to [" + dst_encoding + "]"); // And deal with depth differences if the colors are different std::vector res = val->second; if ((enc::bitDepth(src_encoding) != enc::bitDepth(dst_encoding)) && (getEncoding(src_encoding) != getEncoding(dst_encoding))) res.push_back(SAME_FORMAT); return res; } /////////////////////////////////////// Image /////////////////////////////////////////// // Converts a ROS Image to a cv::Mat by sharing the data or chaning its endianness if needed cv::Mat matFromImage(const sensor_msgs::Image& source) { int source_type = getCvType(source.encoding); int byte_depth = enc::bitDepth(source.encoding) / 8; int num_channels = enc::numChannels(source.encoding); if (source.step < source.width * byte_depth * num_channels) { std::stringstream ss; ss << "Image is wrongly formed: step < width * byte_depth * num_channels or " << source.step << " != " << source.width << " * " << byte_depth << " * " << num_channels; throw Exception(ss.str()); } if (source.height * source.step != source.data.size()) { std::stringstream ss; ss << "Image is wrongly formed: height * step != size or " << source.height << " * " << source.step << " != " << source.data.size(); throw Exception(ss.str()); } // If the endianness is the same as locally, share the data cv::Mat mat(source.height, source.width, source_type, const_cast(&source.data[0]), source.step); if ((boost::endian::order::native == boost::endian::order::big && source.is_bigendian) || (boost::endian::order::native == boost::endian::order::little && !source.is_bigendian) || byte_depth == 1) return mat; // Otherwise, reinterpret the data as bytes and switch the channels accordingly mat = cv::Mat(source.height, source.width, CV_MAKETYPE(CV_8U, num_channels*byte_depth), const_cast(&source.data[0]), source.step); cv::Mat mat_swap(source.height, source.width, mat.type()); std::vector fromTo; fromTo.reserve(num_channels*byte_depth); for(int i = 0; i < num_channels; ++i) for(int j = 0; j < byte_depth; ++j) { fromTo.push_back(byte_depth*i + j); fromTo.push_back(byte_depth*i + byte_depth - 1 - j); } cv::mixChannels(std::vector(1, mat), std::vector(1, mat_swap), fromTo); // Interpret mat_swap back as the proper type mat_swap = cv::Mat(source.height, source.width, source_type, mat_swap.data, mat_swap.step); return mat_swap; } // Internal, used by toCvCopy and cvtColor CvImagePtr toCvCopyImpl(const cv::Mat& source, const std_msgs::Header& src_header, const std::string& src_encoding, const std::string& dst_encoding) { // Copy metadata CvImagePtr ptr = boost::make_shared(); ptr->header = src_header; // Copy 
to new buffer if same encoding requested if (dst_encoding.empty() || dst_encoding == src_encoding) { ptr->encoding = src_encoding; source.copyTo(ptr->image); } else { // Convert the source data to the desired encoding const std::vector conversion_codes = getConversionCode(src_encoding, dst_encoding); cv::Mat image1 = source; cv::Mat image2; for(size_t i=0; iimage = image2; ptr->encoding = dst_encoding; } return ptr; } /// @endcond sensor_msgs::ImagePtr CvImage::toImageMsg() const { sensor_msgs::ImagePtr ptr = boost::make_shared(); toImageMsg(*ptr); return ptr; } void CvImage::toImageMsg(sensor_msgs::Image& ros_image) const { ros_image.header = header; ros_image.height = image.rows; ros_image.width = image.cols; ros_image.encoding = encoding; ros_image.is_bigendian = (boost::endian::order::native == boost::endian::order::big); ros_image.step = image.cols * image.elemSize(); size_t size = ros_image.step * image.rows; ros_image.data.resize(size); if (image.isContinuous()) { memcpy((char*)(&ros_image.data[0]), image.data, size); } else { // Copy by row by row uchar* ros_data_ptr = (uchar*)(&ros_image.data[0]); uchar* cv_data_ptr = image.data; for (int i = 0; i < image.rows; ++i) { memcpy(ros_data_ptr, cv_data_ptr, ros_image.step); ros_data_ptr += ros_image.step; cv_data_ptr += image.step; } } } // Deep copy data, returnee is mutable CvImagePtr toCvCopy(const sensor_msgs::ImageConstPtr& source, const std::string& encoding) { return toCvCopy(*source, encoding); } CvImagePtr toCvCopy(const sensor_msgs::Image& source, const std::string& encoding) { // Construct matrix pointing to source data return toCvCopyImpl(matFromImage(source), source.header, source.encoding, encoding); } // Share const data, returnee is immutable CvImageConstPtr toCvShare(const sensor_msgs::ImageConstPtr& source, const std::string& encoding) { return toCvShare(*source, source, encoding); } CvImageConstPtr toCvShare(const sensor_msgs::Image& source, const boost::shared_ptr& tracked_object, const std::string& encoding) { // If the encoding different or the endianness different, you have to copy if ((!encoding.empty() && source.encoding != encoding) || (source.is_bigendian && (boost::endian::order::native != boost::endian::order::big))) return toCvCopy(source, encoding); CvImagePtr ptr = boost::make_shared(); ptr->header = source.header; ptr->encoding = source.encoding; ptr->tracked_object_ = tracked_object; ptr->image = matFromImage(source); return ptr; } CvImagePtr cvtColor(const CvImageConstPtr& source, const std::string& encoding) { return toCvCopyImpl(source->image, source->header, source->encoding, encoding); } /////////////////////////////////////// CompressedImage /////////////////////////////////////////// cv::Mat matFromImage(const sensor_msgs::CompressedImage& source) { cv::Mat jpegData(1,source.data.size(),CV_8UC1); jpegData.data = const_cast(&source.data[0]); cv::InputArray data(jpegData); cv::Mat bgrMat = cv::imdecode(data,cv::IMREAD_ANYCOLOR); return bgrMat; } sensor_msgs::CompressedImagePtr CvImage::toCompressedImageMsg(const Format dst_format) const { sensor_msgs::CompressedImagePtr ptr = boost::make_shared(); toCompressedImageMsg(*ptr,dst_format); return ptr; } std::string getFormat(Format format) { switch (format) { case DIB: return "dib"; case BMP: return "bmp"; case JPG: return "jpg"; case JPEG: return "jpeg"; case JPE: return "jpe"; case JP2: return "jp2"; case PNG: return "png"; case PBM: return "pbm"; case PGM: return "pgm"; case PPM: return "ppm"; case RAS: return "ras"; case SR: return "sr"; case TIF: 
return "tif"; case TIFF: return "tiff"; } throw Exception("Unrecognized image format"); } void CvImage::toCompressedImageMsg(sensor_msgs::CompressedImage& ros_image, const Format dst_format) const { ros_image.header = header; cv::Mat image; if(encoding != enc::BGR8) { CvImagePtr tempThis = boost::make_shared(*this); CvImagePtr temp = cvtColor(tempThis,enc::BGR8); image = temp->image; } else { image = this->image; } std::vector buf; std::string format = getFormat(dst_format); ros_image.format = format; cv::imencode("." + format, image, buf); ros_image.data = buf; } // Deep copy data, returnee is mutable CvImagePtr toCvCopy(const sensor_msgs::CompressedImageConstPtr& source, const std::string& encoding) { return toCvCopy(*source, encoding); } CvImagePtr toCvCopy(const sensor_msgs::CompressedImage& source, const std::string& encoding) { // Construct matrix pointing to source data return toCvCopyImpl(matFromImage(source), source.header, enc::BGR8, encoding); } CvImageConstPtr cvtColorForDisplay(const CvImageConstPtr& source, const std::string& encoding_out, const CvtColorForDisplayOptions options) { double min_image_value = options.min_image_value; double max_image_value = options.max_image_value; if (!source) throw Exception("cv_bridge.cvtColorForDisplay() called with empty image."); // let's figure out what to do with the empty encoding std::string encoding = encoding_out; if (encoding.empty()) { try { // Let's decide upon an output format if (enc::numChannels(source->encoding) == 1) { if ((source->encoding == enc::TYPE_32SC1) || (enc::bitDepth(source->encoding) == 8) || (enc::bitDepth(source->encoding) == 16) || (enc::bitDepth(source->encoding) == 32)) encoding = enc::BGR8; else throw std::runtime_error("Unsupported depth of the source encoding " + encoding); } else { // We choose BGR by default here as we assume people will use OpenCV if ((enc::bitDepth(source->encoding) == 8) || (enc::bitDepth(source->encoding) == 16)) encoding = enc::BGR8; else throw std::runtime_error("Unsupported depth of the source encoding " + encoding); } } // We could have cv_bridge exception or std_runtime_error from sensor_msgs::image_codings routines catch (const std::runtime_error& e) { throw Exception("cv_bridge.cvtColorForDisplay() output encoding is empty and cannot be guessed."); } } else { if ((!enc::isColor(encoding_out) && !enc::isMono(encoding_out)) || (enc::bitDepth(encoding) != 8)) throw Exception("cv_bridge.cvtColorForDisplay() does not have an output encoding that is color or mono, and has is bit in depth"); } // Convert label to bgr image if (encoding == sensor_msgs::image_encodings::BGR8 && source->encoding == enc::TYPE_32SC1) { CvImagePtr result(new CvImage()); result->header = source->header; result->encoding = encoding; result->image = cv::Mat(source->image.rows, source->image.cols, CV_8UC3); for (size_t j = 0; j < source->image.rows; ++j) { for (size_t i = 0; i < source->image.cols; ++i) { int label = source->image.at(j, i); if (label == options.bg_label) { // background label result->image.at(j, i) = cv::Vec3b(0, 0, 0); } else { cv::Vec3d rgb = rgb_colors::getRGBColor(label); // result image should be BGR result->image.at(j, i) = cv::Vec3b(int(rgb[2] * 255), int(rgb[1] * 255), int(rgb[0] * 255)); } } } return result; } // Perform scaling if asked for if (options.do_dynamic_scaling) { cv::minMaxLoc(source->image, &min_image_value, &max_image_value); if (min_image_value == max_image_value) { CvImagePtr result(new CvImage()); result->header = source->header; result->encoding = encoding; if 
(enc::bitDepth(encoding) == 1) { result->image = cv::Mat(source->image.size(), CV_8UC1); result->image.setTo(255./2.); } else { result->image = cv::Mat(source->image.size(), CV_8UC3); result->image.setTo(cv::Scalar(1., 1., 1.)*255./2.); } return result; } } if (min_image_value != max_image_value) { if (enc::numChannels(source->encoding) != 1) throw Exception("cv_bridge.cvtColorForDisplay() scaling for images with more than one channel is unsupported"); CvImagePtr img_scaled(new CvImage()); img_scaled->header = source->header; if (options.colormap == -1) { img_scaled->encoding = enc::MONO8; cv::Mat(source->image-min_image_value).convertTo(img_scaled->image, CV_8UC1, 255.0 / (max_image_value - min_image_value)); } else { img_scaled->encoding = enc::BGR8; cv::Mat(source->image-min_image_value).convertTo(img_scaled->image, CV_8UC3, 255.0 / (max_image_value - min_image_value)); cv::applyColorMap(img_scaled->image, img_scaled->image, options.colormap); // Fill black color to the nan region. if (source->encoding == enc::TYPE_32FC1) { for (size_t j = 0; j < source->image.rows; ++j) { for (size_t i = 0; i < source->image.cols; ++i) { float float_value = source->image.at(j, i); if (std::isnan(float_value)) { img_scaled->image.at(j, i) = cv::Vec3b(0, 0, 0); } } } } } return cvtColor(img_scaled, encoding); } // If no color conversion is possible, we must "guess" the input format CvImagePtr source_typed(new CvImage()); source_typed->image = source->image; source_typed->header = source->header; source_typed->encoding = source->encoding; // If we get the OpenCV format, if we have 1,3 or 4 channels, we are most likely in mono, BGR or BGRA modes if (source->encoding == "CV_8UC1") source_typed->encoding = enc::MONO8; else if (source->encoding == "16UC1") source_typed->encoding = enc::MONO16; else if (source->encoding == "CV_8UC3") source_typed->encoding = enc::BGR8; else if (source->encoding == "CV_8UC4") source_typed->encoding = enc::BGRA8; else if (source->encoding == "CV_16UC3") source_typed->encoding = enc::BGR8; else if (source->encoding == "CV_16UC4") source_typed->encoding = enc::BGRA8; // If no conversion is needed, don't convert if (source_typed->encoding == encoding) return source; try { // Now that the output is a proper color format, try to see if any conversion is possible return cvtColor(source_typed, encoding); } catch (cv_bridge::Exception& e) { throw Exception("cv_bridge.cvtColorForDisplay() while trying to convert image from '" + source->encoding + "' to '" + encoding + "' an exception was thrown (" + e.what() + ")"); } } } //namespace cv_bridge vision_opencv-1.12.3/cv_bridge/src/module.cpp000066400000000000000000000114311302104371500211570ustar00rootroot00000000000000/********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2012, Willow Garage, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. 
* * Neither the name of the Willow Garage nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. *********************************************************************/ #include "module.hpp" PyObject *mod_opencv; bp::object cvtColor2Wrap(bp::object obj_in, const std::string & encoding_in, const std::string & encoding_out) { // Convert the Python input to an image cv::Mat mat_in; convert_to_CvMat2(obj_in.ptr(), mat_in); // Call cv_bridge for color conversion cv_bridge::CvImagePtr cv_image(new cv_bridge::CvImage(std_msgs::Header(), encoding_in, mat_in)); cv::Mat mat = cv_bridge::cvtColor(cv_image, encoding_out)->image; return bp::object(boost::python::handle<>(pyopencv_from(mat))); } bp::object cvtColorForDisplayWrap(bp::object obj_in, const std::string & encoding_in, const std::string & encoding_out, bool do_dynamic_scaling = false, double min_image_value = 0.0, double max_image_value = 0.0) { // Convert the Python input to an image cv::Mat mat_in; convert_to_CvMat2(obj_in.ptr(), mat_in); cv_bridge::CvImagePtr cv_image(new cv_bridge::CvImage(std_msgs::Header(), encoding_in, mat_in)); cv_bridge::CvtColorForDisplayOptions options; options.do_dynamic_scaling = do_dynamic_scaling; options.min_image_value = min_image_value; options.max_image_value = max_image_value; cv::Mat mat = cv_bridge::cvtColorForDisplay(/*source=*/cv_image, /*encoding_out=*/encoding_out, /*options=*/options)->image; return bp::object(boost::python::handle<>(pyopencv_from(mat))); } BOOST_PYTHON_FUNCTION_OVERLOADS(cvtColorForDisplayWrap_overloads, cvtColorForDisplayWrap, 3, 6) int CV_MAT_CNWrap(int i) { return CV_MAT_CN(i); } int CV_MAT_DEPTHWrap(int i) { return CV_MAT_DEPTH(i); } BOOST_PYTHON_MODULE(cv_bridge_boost) { do_numpy_import(); mod_opencv = PyImport_ImportModule("cv2"); // Wrap the function to get encodings as OpenCV types boost::python::def("getCvType", cv_bridge::getCvType); boost::python::def("cvtColor2", cvtColor2Wrap); boost::python::def("CV_MAT_CNWrap", CV_MAT_CNWrap); boost::python::def("CV_MAT_DEPTHWrap", CV_MAT_DEPTHWrap); boost::python::def("cvtColorForDisplay", cvtColorForDisplayWrap, cvtColorForDisplayWrap_overloads( boost::python::args("source", "encoding_in", "encoding_out", "do_dynamic_scaling", "min_image_value", "max_image_value"), "Convert image to display with specified encodings.\n\n" "Args:\n" " - source (numpy.ndarray): input image\n" " - encoding_in (str): input image encoding\n" " - encoding_out (str): encoding to which the image conveted\n" " - do_dynamic_scaling (bool): flag to do dynamic scaling with min/max value\n" " - min_image_value (float): minimum pixel value for dynamic scaling\n" " - max_image_value (float): maximum pixel 
value for dynamic scaling\n" )); } vision_opencv-1.12.3/cv_bridge/src/module.hpp000066400000000000000000000022301302104371500211610ustar00rootroot00000000000000/* * Copyright 2014 Open Source Robotics Foundation, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef CV_BRIDGE_MODULE_HPP_ #define CV_BRIDGE_MODULE_HPP_ #include #include #include #include #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #include #include namespace bp = boost::python; int convert_to_CvMat2(const PyObject* o, cv::Mat& m); PyObject* pyopencv_from(const cv::Mat& m); #if PYTHON3 static int do_numpy_import( ) { import_array( ); } #else static void do_numpy_import( ) { import_array( ); } #endif #endif vision_opencv-1.12.3/cv_bridge/src/module_opencv2.cpp000066400000000000000000000171141302104371500226170ustar00rootroot00000000000000/********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2012, Willow Garage, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Willow Garage nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. *********************************************************************/ #include "module.hpp" // These are sucky, sketchy versions of the real things in OpenCV Python, // inferior in every way. static int failmsg(const char *fmt, ...) 
{ char str[1000]; va_list ap; va_start(ap, fmt); vsnprintf(str, sizeof(str), fmt, ap); va_end(ap); PyErr_SetString(PyExc_TypeError, str); return 0; } // Taken from http://stackoverflow.com/questions/19136944/call-c-opencv-functions-from-python-send-a-cv-mat-to-c-dll-which-is-usi static size_t REFCOUNT_OFFSET = ( size_t )&((( PyObject* )0)->ob_refcnt ) + ( 0x12345678 != *( const size_t* )"\x78\x56\x34\x12\0\0\0\0\0" )*sizeof( int ); static inline PyObject* pyObjectFromRefcount( const int* refcount ) { return ( PyObject* )(( size_t )refcount - REFCOUNT_OFFSET ); } static inline int* refcountFromPyObject( const PyObject* obj ) { return ( int* )(( size_t )obj + REFCOUNT_OFFSET ); } class NumpyAllocator : public cv::MatAllocator { public: NumpyAllocator( ) { } ~NumpyAllocator( ) { } void allocate( int dims, const int* sizes, int type, int*& refcount, uchar*& datastart, uchar*& data, size_t* step ); void deallocate( int* refcount, uchar* datastart, uchar* data ); }; void NumpyAllocator::allocate( int dims, const int* sizes, int type, int*& refcount, uchar*& datastart, uchar*& data, size_t* step ) { int depth = CV_MAT_DEPTH( type ); int cn = CV_MAT_CN( type ); const int f = ( int )( sizeof( size_t )/8 ); int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE : depth == CV_16U ? NPY_USHORT : depth == CV_16S ? NPY_SHORT : depth == CV_32S ? NPY_INT : depth == CV_32F ? NPY_FLOAT : depth == CV_64F ? NPY_DOUBLE : f*NPY_ULONGLONG + (f^1)*NPY_UINT; int i; npy_intp _sizes[CV_MAX_DIM+1]; for( i = 0; i < dims; i++ ) _sizes[i] = sizes[i]; if( cn > 1 ) { /*if( _sizes[dims-1] == 1 ) _sizes[dims-1] = cn; else*/ _sizes[dims++] = cn; } PyObject* o = PyArray_SimpleNew( dims, _sizes, typenum ); if( !o ) CV_Error_(CV_StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims)); refcount = refcountFromPyObject(o); npy_intp* _strides = PyArray_STRIDES((PyArrayObject*) o); for( i = 0; i < dims - (cn > 1); i++ ) step[i] = (size_t)_strides[i]; datastart = data = (uchar*)PyArray_DATA((PyArrayObject*)o); } void NumpyAllocator::deallocate( int* refcount, uchar* datastart, uchar* data ) { if( !refcount ) return; PyObject* o = pyObjectFromRefcount(refcount); Py_INCREF(o); Py_DECREF(o); } // Declare the object NumpyAllocator g_numpyAllocator; int convert_to_CvMat2(const PyObject* o, cv::Mat& m) { // to avoid PyArray_Check() to crash even with valid array do_numpy_import(); if(!o || o == Py_None) { if( !m.data ) m.allocator = &g_numpyAllocator; return true; } if( !PyArray_Check(o) ) { failmsg("Not a numpy array"); return false; } // NPY_LONG (64 bit) is converted to CV_32S (32 bit) int typenum = PyArray_TYPE((PyArrayObject*) o); int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S : typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S : typenum == NPY_INT || typenum == NPY_LONG ? CV_32S : typenum == NPY_FLOAT ? CV_32F : typenum == NPY_DOUBLE ? 
CV_64F : -1; if( type < 0 ) { failmsg("data type = %d is not supported", typenum); return false; } int ndims = PyArray_NDIM((PyArrayObject*) o); if(ndims >= CV_MAX_DIM) { failmsg("dimensionality (=%d) is too high", ndims); return false; } int size[CV_MAX_DIM+1]; size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type); const npy_intp* _sizes = PyArray_DIMS((PyArrayObject*) o); const npy_intp* _strides = PyArray_STRIDES((PyArrayObject*) o); bool transposed = false; for(int i = 0; i < ndims; i++) { size[i] = (int)_sizes[i]; step[i] = (size_t)_strides[i]; } if( ndims == 0 || step[ndims-1] > elemsize ) { size[ndims] = 1; step[ndims] = elemsize; ndims++; } if( ndims >= 2 && step[0] < step[1] ) { std::swap(size[0], size[1]); std::swap(step[0], step[1]); transposed = true; } if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] ) { ndims--; type |= CV_MAKETYPE(0, size[2]); } if( ndims > 2 ) { failmsg("more than 2 dimensions"); return false; } m = cv::Mat(ndims, size, type, PyArray_DATA((PyArrayObject*) o), step); if( m.data ) { m.refcount = refcountFromPyObject(o); m.addref(); // protect the original numpy array from deallocation // (since Mat destructor will decrement the reference counter) }; m.allocator = &g_numpyAllocator; if( transposed ) { cv::Mat tmp; tmp.allocator = &g_numpyAllocator; transpose(m, tmp); m = tmp; } return true; } PyObject* pyopencv_from(const cv::Mat& mat) { npy_intp dims[] = {mat.rows, mat.cols, mat.channels()}; PyObject *res = 0 ; if (mat.depth() == CV_8U) res = PyArray_SimpleNew(3, dims, NPY_UBYTE); else if (mat.depth() == CV_8S) res = PyArray_SimpleNew(3, dims, NPY_BYTE); else if (mat.depth() == CV_16S) res = PyArray_SimpleNew(3, dims, NPY_SHORT); else if (mat.depth() == CV_16U) res = PyArray_SimpleNew(3, dims, NPY_USHORT); else if (mat.depth() == CV_32S) res = PyArray_SimpleNew(3, dims, NPY_INT); else if (mat.depth() == CV_32F) res = PyArray_SimpleNew(3, dims, NPY_CFLOAT); else if (mat.depth() == CV_64F) res = PyArray_SimpleNew(3, dims, NPY_CDOUBLE); std::memcpy(PyArray_DATA((PyArrayObject*)res), mat.data, mat.step*mat.rows); return res; } vision_opencv-1.12.3/cv_bridge/src/module_opencv3.cpp000066400000000000000000000225231302104371500226200ustar00rootroot00000000000000// Taken from opencv/modules/python/src2/cv2.cpp #include "module.hpp" #include "opencv2/core/types_c.h" #include "opencv2/opencv_modules.hpp" #include "pycompat.hpp" static PyObject* opencv_error = 0; static int failmsg(const char *fmt, ...) { char str[1000]; va_list ap; va_start(ap, fmt); vsnprintf(str, sizeof(str), fmt, ap); va_end(ap); PyErr_SetString(PyExc_TypeError, str); return 0; } struct ArgInfo { const char * name; bool outputarg; // more fields may be added if necessary ArgInfo(const char * name_, bool outputarg_) : name(name_) , outputarg(outputarg_) {} // to match with older pyopencv_to function signature operator const char *() const { return name; } }; class PyAllowThreads { public: PyAllowThreads() : _state(PyEval_SaveThread()) {} ~PyAllowThreads() { PyEval_RestoreThread(_state); } private: PyThreadState* _state; }; class PyEnsureGIL { public: PyEnsureGIL() : _state(PyGILState_Ensure()) {} ~PyEnsureGIL() { PyGILState_Release(_state); } private: PyGILState_STATE _state; }; #define ERRWRAP2(expr) \ try \ { \ PyAllowThreads allowThreads; \ expr; \ } \ catch (const cv::Exception &e) \ { \ PyErr_SetString(opencv_error, e.what()); \ return 0; \ } using namespace cv; static PyObject* failmsgp(const char *fmt, ...) 
{ char str[1000]; va_list ap; va_start(ap, fmt); vsnprintf(str, sizeof(str), fmt, ap); va_end(ap); PyErr_SetString(PyExc_TypeError, str); return 0; } class NumpyAllocator : public MatAllocator { public: NumpyAllocator() { stdAllocator = Mat::getStdAllocator(); } ~NumpyAllocator() {} UMatData* allocate(PyObject* o, int dims, const int* sizes, int type, size_t* step) const { UMatData* u = new UMatData(this); u->data = u->origdata = (uchar*)PyArray_DATA((PyArrayObject*) o); npy_intp* _strides = PyArray_STRIDES((PyArrayObject*) o); for( int i = 0; i < dims - 1; i++ ) step[i] = (size_t)_strides[i]; step[dims-1] = CV_ELEM_SIZE(type); u->size = sizes[0]*step[0]; u->userdata = o; return u; } UMatData* allocate(int dims0, const int* sizes, int type, void* data, size_t* step, int flags, UMatUsageFlags usageFlags) const { if( data != 0 ) { CV_Error(Error::StsAssert, "The data should normally be NULL!"); // probably this is safe to do in such extreme case return stdAllocator->allocate(dims0, sizes, type, data, step, flags, usageFlags); } PyEnsureGIL gil; int depth = CV_MAT_DEPTH(type); int cn = CV_MAT_CN(type); const int f = (int)(sizeof(size_t)/8); int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE : depth == CV_16U ? NPY_USHORT : depth == CV_16S ? NPY_SHORT : depth == CV_32S ? NPY_INT : depth == CV_32F ? NPY_FLOAT : depth == CV_64F ? NPY_DOUBLE : f*NPY_ULONGLONG + (f^1)*NPY_UINT; int i, dims = dims0; cv::AutoBuffer _sizes(dims + 1); for( i = 0; i < dims; i++ ) _sizes[i] = sizes[i]; if( cn > 1 ) _sizes[dims++] = cn; PyObject* o = PyArray_SimpleNew(dims, _sizes, typenum); if(!o) CV_Error_(Error::StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims)); return allocate(o, dims0, sizes, type, step); } bool allocate(UMatData* u, int accessFlags, UMatUsageFlags usageFlags) const { return stdAllocator->allocate(u, accessFlags, usageFlags); } void deallocate(UMatData* u) const { if(u) { PyEnsureGIL gil; PyObject* o = (PyObject*)u->userdata; Py_XDECREF(o); delete u; } } const MatAllocator* stdAllocator; }; NumpyAllocator g_numpyAllocator; template static bool pyopencv_to(PyObject* obj, T& p, const char* name = ""); template static PyObject* pyopencv_from(const T& src); enum { ARG_NONE = 0, ARG_MAT = 1, ARG_SCALAR = 2 }; // special case, when the convertor needs full ArgInfo structure static bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo info) { // to avoid PyArray_Check() to crash even with valid array do_numpy_import( ); bool allowND = true; if(!o || o == Py_None) { if( !m.data ) m.allocator = &g_numpyAllocator; return true; } if( PyInt_Check(o) ) { double v[] = {(double)PyInt_AsLong((PyObject*)o), 0., 0., 0.}; m = Mat(4, 1, CV_64F, v).clone(); return true; } if( PyFloat_Check(o) ) { double v[] = {PyFloat_AsDouble((PyObject*)o), 0., 0., 0.}; m = Mat(4, 1, CV_64F, v).clone(); return true; } if( PyTuple_Check(o) ) { int i, sz = (int)PyTuple_Size((PyObject*)o); m = Mat(sz, 1, CV_64F); for( i = 0; i < sz; i++ ) { PyObject* oi = PyTuple_GET_ITEM(o, i); if( PyInt_Check(oi) ) m.at(i) = (double)PyInt_AsLong(oi); else if( PyFloat_Check(oi) ) m.at(i) = (double)PyFloat_AsDouble(oi); else { failmsg("%s is not a numerical tuple", info.name); m.release(); return false; } } return true; } if( !PyArray_Check(o) ) { failmsg("%s is not a numpy array, neither a scalar", info.name); return false; } PyArrayObject* oarr = (PyArrayObject*) o; bool needcopy = false, needcast = false; int typenum = PyArray_TYPE(oarr), new_typenum = typenum; int type = typenum == NPY_UBYTE ? 
CV_8U : typenum == NPY_BYTE ? CV_8S : typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S : typenum == NPY_INT ? CV_32S : typenum == NPY_INT32 ? CV_32S : typenum == NPY_FLOAT ? CV_32F : typenum == NPY_DOUBLE ? CV_64F : -1; if( type < 0 ) { if( typenum == NPY_INT64 || typenum == NPY_UINT64 || type == NPY_LONG ) { needcopy = needcast = true; new_typenum = NPY_INT; type = CV_32S; } else { failmsg("%s data type = %d is not supported", info.name, typenum); return false; } } #ifndef CV_MAX_DIM const int CV_MAX_DIM = 32; #endif int ndims = PyArray_NDIM(oarr); if(ndims >= CV_MAX_DIM) { failmsg("%s dimensionality (=%d) is too high", info.name, ndims); return false; } int size[CV_MAX_DIM+1]; size_t step[CV_MAX_DIM+1]; size_t elemsize = CV_ELEM_SIZE1(type); const npy_intp* _sizes = PyArray_DIMS(oarr); const npy_intp* _strides = PyArray_STRIDES(oarr); bool ismultichannel = ndims == 3 && _sizes[2] <= CV_CN_MAX; for( int i = ndims-1; i >= 0 && !needcopy; i-- ) { // these checks handle cases of // a) multi-dimensional (ndims > 2) arrays, as well as simpler 1- and 2-dimensional cases // b) transposed arrays, where _strides[] elements go in non-descending order // c) flipped arrays, where some of _strides[] elements are negative if( (i == ndims-1 && (size_t)_strides[i] != elemsize) || (i < ndims-1 && _strides[i] < _strides[i+1]) ) needcopy = true; } if( ismultichannel && _strides[1] != (npy_intp)elemsize*_sizes[2] ) needcopy = true; if (needcopy) { if (info.outputarg) { failmsg("Layout of the output array %s is incompatible with cv::Mat (step[ndims-1] != elemsize or step[1] != elemsize*nchannels)", info.name); return false; } if( needcast ) { o = PyArray_Cast(oarr, new_typenum); oarr = (PyArrayObject*) o; } else { oarr = PyArray_GETCONTIGUOUS(oarr); o = (PyObject*) oarr; } _strides = PyArray_STRIDES(oarr); } for(int i = 0; i < ndims; i++) { size[i] = (int)_sizes[i]; step[i] = (size_t)_strides[i]; } // handle degenerate case if( ndims == 0) { size[ndims] = 1; step[ndims] = elemsize; ndims++; } if( ismultichannel ) { ndims--; type |= CV_MAKETYPE(0, size[2]); } if( ndims > 2 && !allowND ) { failmsg("%s has more than 2 dimensions", info.name); return false; } m = Mat(ndims, size, type, PyArray_DATA(oarr), step); m.u = g_numpyAllocator.allocate(o, ndims, size, type, step); m.addref(); if( !needcopy ) { Py_INCREF(o); } m.allocator = &g_numpyAllocator; return true; } template<> bool pyopencv_to(PyObject* o, Mat& m, const char* name) { return pyopencv_to(o, m, ArgInfo(name, 0)); } PyObject* pyopencv_from(const Mat& m) { if( !m.data ) Py_RETURN_NONE; Mat temp, *p = (Mat*)&m; if(!p->u || p->allocator != &g_numpyAllocator) { temp.allocator = &g_numpyAllocator; ERRWRAP2(m.copyTo(temp)); p = &temp; } PyObject* o = (PyObject*)p->u->userdata; Py_INCREF(o); return o; } int convert_to_CvMat2(const PyObject* o, cv::Mat& m) { pyopencv_to(const_cast(o), m, "unknown"); return 0; } vision_opencv-1.12.3/cv_bridge/src/pycompat.hpp000066400000000000000000000060011302104371500215300ustar00rootroot00000000000000/*M/////////////////////////////////////////////////////////////////////////////////////// // // IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING. // // By downloading, copying, installing or using the software you agree to this license. // If you do not agree to this license, do not download, install, // copy or use the software. // // // License Agreement // For Open Source Computer Vision Library // // Copyright (C) 2000-2008, Intel Corporation, all rights reserved. 
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved. // Third party copyrights are property of their respective owners. // // Redistribution and use in source and binary forms, with or without modification, // are permitted provided that the following conditions are met: // // * Redistribution's of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // * Redistribution's in binary form must reproduce the above copyright notice, // this list of conditions and the following disclaimer in the documentation // and/or other materials provided with the distribution. // // * The name of the copyright holders may not be used to endorse or promote products // derived from this software without specific prior written permission. // // This software is provided by the copyright holders and contributors "as is" and // any express or implied warranties, including, but not limited to, the implied // warranties of merchantability and fitness for a particular purpose are disclaimed. // In no event shall the Intel Corporation or contributors be liable for any direct, // indirect, incidental, special, exemplary, or consequential damages // (including, but not limited to, procurement of substitute goods or services; // loss of use, data, or profits; or business interruption) however caused // and on any theory of liability, whether in contract, strict liability, // or tort (including negligence or otherwise) arising in any way out of // the use of this software, even if advised of the possibility of such damage. // //M*/ // Defines for Python 2/3 compatibility. #ifndef __PYCOMPAT_HPP__ #define __PYCOMPAT_HPP__ #if PY_MAJOR_VERSION >= 3 // Python3 treats all ints as longs, PyInt_X functions have been removed. #define PyInt_Check PyLong_Check #define PyInt_CheckExact PyLong_CheckExact #define PyInt_AsLong PyLong_AsLong #define PyInt_AS_LONG PyLong_AS_LONG #define PyInt_FromLong PyLong_FromLong #define PyNumber_Int PyNumber_Long // Python3 strings are unicode, these defines mimic the Python2 functionality. #define PyString_Check PyUnicode_Check #define PyString_FromString PyUnicode_FromString #define PyString_FromStringAndSize PyUnicode_FromStringAndSize #define PyString_Size PyUnicode_GET_SIZE // PyUnicode_AsUTF8 isn't available until Python 3.3 #if (PY_VERSION_HEX < 0x03030000) #define PyString_AsString _PyUnicode_AsString #else #define PyString_AsString PyUnicode_AsUTF8 #endif #endif #endif // END HEADER GUARD vision_opencv-1.12.3/cv_bridge/src/rgb_colors.cpp000066400000000000000000000255741302104371500220420ustar00rootroot00000000000000// -*- mode: c++ -*- /********************************************************************* * Original color definition is at scikit-image distributed with * following license disclaimer: * * Copyright (C) 2011, the scikit-image team * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * 3. 
Neither the name of skimage nor the names of its contributors may be * used to endorse or promote products derived from this software without * specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. *********************************************************************/ #include "cv_bridge/rgb_colors.h" #include namespace cv_bridge { namespace rgb_colors { cv::Vec3d getRGBColor(const int color) { cv::Vec3d c; switch (color % 146) { case ALICEBLUE: c = cv::Vec3d(0.941, 0.973, 1); break; case ANTIQUEWHITE: c = cv::Vec3d(0.98, 0.922, 0.843); break; case AQUA: c = cv::Vec3d(0, 1, 1); break; case AQUAMARINE: c = cv::Vec3d(0.498, 1, 0.831); break; case AZURE: c = cv::Vec3d(0.941, 1, 1); break; case BEIGE: c = cv::Vec3d(0.961, 0.961, 0.863); break; case BISQUE: c = cv::Vec3d(1, 0.894, 0.769); break; case BLACK: c = cv::Vec3d(0, 0, 0); break; case BLANCHEDALMOND: c = cv::Vec3d(1, 0.922, 0.804); break; case BLUE: c = cv::Vec3d(0, 0, 1); break; case BLUEVIOLET: c = cv::Vec3d(0.541, 0.169, 0.886); break; case BROWN: c = cv::Vec3d(0.647, 0.165, 0.165); break; case BURLYWOOD: c = cv::Vec3d(0.871, 0.722, 0.529); break; case CADETBLUE: c = cv::Vec3d(0.373, 0.62, 0.627); break; case CHARTREUSE: c = cv::Vec3d(0.498, 1, 0); break; case CHOCOLATE: c = cv::Vec3d(0.824, 0.412, 0.118); break; case CORAL: c = cv::Vec3d(1, 0.498, 0.314); break; case CORNFLOWERBLUE: c = cv::Vec3d(0.392, 0.584, 0.929); break; case CORNSILK: c = cv::Vec3d(1, 0.973, 0.863); break; case CRIMSON: c = cv::Vec3d(0.863, 0.0784, 0.235); break; case CYAN: c = cv::Vec3d(0, 1, 1); break; case DARKBLUE: c = cv::Vec3d(0, 0, 0.545); break; case DARKCYAN: c = cv::Vec3d(0, 0.545, 0.545); break; case DARKGOLDENROD: c = cv::Vec3d(0.722, 0.525, 0.0431); break; case DARKGRAY: c = cv::Vec3d(0.663, 0.663, 0.663); break; case DARKGREEN: c = cv::Vec3d(0, 0.392, 0); break; case DARKGREY: c = cv::Vec3d(0.663, 0.663, 0.663); break; case DARKKHAKI: c = cv::Vec3d(0.741, 0.718, 0.42); break; case DARKMAGENTA: c = cv::Vec3d(0.545, 0, 0.545); break; case DARKOLIVEGREEN: c = cv::Vec3d(0.333, 0.42, 0.184); break; case DARKORANGE: c = cv::Vec3d(1, 0.549, 0); break; case DARKORCHID: c = cv::Vec3d(0.6, 0.196, 0.8); break; case DARKRED: c = cv::Vec3d(0.545, 0, 0); break; case DARKSALMON: c = cv::Vec3d(0.914, 0.588, 0.478); break; case DARKSEAGREEN: c = cv::Vec3d(0.561, 0.737, 0.561); break; case DARKSLATEBLUE: c = cv::Vec3d(0.282, 0.239, 0.545); break; case DARKSLATEGRAY: c = cv::Vec3d(0.184, 0.31, 0.31); break; case DARKSLATEGREY: c = cv::Vec3d(0.184, 0.31, 0.31); break; case DARKTURQUOISE: c = cv::Vec3d(0, 0.808, 0.82); break; case DARKVIOLET: c = cv::Vec3d(0.58, 0, 0.827); break; case DEEPPINK: c = cv::Vec3d(1, 0.0784, 0.576); break; case DEEPSKYBLUE: c = cv::Vec3d(0, 0.749, 1); break; case DIMGRAY: c = cv::Vec3d(0.412, 0.412, 0.412); break; case DIMGREY: c = 
cv::Vec3d(0.412, 0.412, 0.412); break; case DODGERBLUE: c = cv::Vec3d(0.118, 0.565, 1); break; case FIREBRICK: c = cv::Vec3d(0.698, 0.133, 0.133); break; case FLORALWHITE: c = cv::Vec3d(1, 0.98, 0.941); break; case FORESTGREEN: c = cv::Vec3d(0.133, 0.545, 0.133); break; case FUCHSIA: c = cv::Vec3d(1, 0, 1); break; case GAINSBORO: c = cv::Vec3d(0.863, 0.863, 0.863); break; case GHOSTWHITE: c = cv::Vec3d(0.973, 0.973, 1); break; case GOLD: c = cv::Vec3d(1, 0.843, 0); break; case GOLDENROD: c = cv::Vec3d(0.855, 0.647, 0.125); break; case GRAY: c = cv::Vec3d(0.502, 0.502, 0.502); break; case GREEN: c = cv::Vec3d(0, 0.502, 0); break; case GREENYELLOW: c = cv::Vec3d(0.678, 1, 0.184); break; case GREY: c = cv::Vec3d(0.502, 0.502, 0.502); break; case HONEYDEW: c = cv::Vec3d(0.941, 1, 0.941); break; case HOTPINK: c = cv::Vec3d(1, 0.412, 0.706); break; case INDIANRED: c = cv::Vec3d(0.804, 0.361, 0.361); break; case INDIGO: c = cv::Vec3d(0.294, 0, 0.51); break; case IVORY: c = cv::Vec3d(1, 1, 0.941); break; case KHAKI: c = cv::Vec3d(0.941, 0.902, 0.549); break; case LAVENDER: c = cv::Vec3d(0.902, 0.902, 0.98); break; case LAVENDERBLUSH: c = cv::Vec3d(1, 0.941, 0.961); break; case LAWNGREEN: c = cv::Vec3d(0.486, 0.988, 0); break; case LEMONCHIFFON: c = cv::Vec3d(1, 0.98, 0.804); break; case LIGHTBLUE: c = cv::Vec3d(0.678, 0.847, 0.902); break; case LIGHTCORAL: c = cv::Vec3d(0.941, 0.502, 0.502); break; case LIGHTCYAN: c = cv::Vec3d(0.878, 1, 1); break; case LIGHTGOLDENRODYELLOW: c = cv::Vec3d(0.98, 0.98, 0.824); break; case LIGHTGRAY: c = cv::Vec3d(0.827, 0.827, 0.827); break; case LIGHTGREEN: c = cv::Vec3d(0.565, 0.933, 0.565); break; case LIGHTGREY: c = cv::Vec3d(0.827, 0.827, 0.827); break; case LIGHTPINK: c = cv::Vec3d(1, 0.714, 0.757); break; case LIGHTSALMON: c = cv::Vec3d(1, 0.627, 0.478); break; case LIGHTSEAGREEN: c = cv::Vec3d(0.125, 0.698, 0.667); break; case LIGHTSKYBLUE: c = cv::Vec3d(0.529, 0.808, 0.98); break; case LIGHTSLATEGRAY: c = cv::Vec3d(0.467, 0.533, 0.6); break; case LIGHTSLATEGREY: c = cv::Vec3d(0.467, 0.533, 0.6); break; case LIGHTSTEELBLUE: c = cv::Vec3d(0.69, 0.769, 0.871); break; case LIGHTYELLOW: c = cv::Vec3d(1, 1, 0.878); break; case LIME: c = cv::Vec3d(0, 1, 0); break; case LIMEGREEN: c = cv::Vec3d(0.196, 0.804, 0.196); break; case LINEN: c = cv::Vec3d(0.98, 0.941, 0.902); break; case MAGENTA: c = cv::Vec3d(1, 0, 1); break; case MAROON: c = cv::Vec3d(0.502, 0, 0); break; case MEDIUMAQUAMARINE: c = cv::Vec3d(0.4, 0.804, 0.667); break; case MEDIUMBLUE: c = cv::Vec3d(0, 0, 0.804); break; case MEDIUMORCHID: c = cv::Vec3d(0.729, 0.333, 0.827); break; case MEDIUMPURPLE: c = cv::Vec3d(0.576, 0.439, 0.859); break; case MEDIUMSEAGREEN: c = cv::Vec3d(0.235, 0.702, 0.443); break; case MEDIUMSLATEBLUE: c = cv::Vec3d(0.482, 0.408, 0.933); break; case MEDIUMSPRINGGREEN: c = cv::Vec3d(0, 0.98, 0.604); break; case MEDIUMTURQUOISE: c = cv::Vec3d(0.282, 0.82, 0.8); break; case MEDIUMVIOLETRED: c = cv::Vec3d(0.78, 0.0824, 0.522); break; case MIDNIGHTBLUE: c = cv::Vec3d(0.098, 0.098, 0.439); break; case MINTCREAM: c = cv::Vec3d(0.961, 1, 0.98); break; case MISTYROSE: c = cv::Vec3d(1, 0.894, 0.882); break; case MOCCASIN: c = cv::Vec3d(1, 0.894, 0.71); break; case NAVAJOWHITE: c = cv::Vec3d(1, 0.871, 0.678); break; case NAVY: c = cv::Vec3d(0, 0, 0.502); break; case OLDLACE: c = cv::Vec3d(0.992, 0.961, 0.902); break; case OLIVE: c = cv::Vec3d(0.502, 0.502, 0); break; case OLIVEDRAB: c = cv::Vec3d(0.42, 0.557, 0.137); break; case ORANGE: c = cv::Vec3d(1, 0.647, 0); break; case ORANGERED: c = 
cv::Vec3d(1, 0.271, 0); break; case ORCHID: c = cv::Vec3d(0.855, 0.439, 0.839); break; case PALEGOLDENROD: c = cv::Vec3d(0.933, 0.91, 0.667); break; case PALEGREEN: c = cv::Vec3d(0.596, 0.984, 0.596); break; case PALEVIOLETRED: c = cv::Vec3d(0.686, 0.933, 0.933); break; case PAPAYAWHIP: c = cv::Vec3d(1, 0.937, 0.835); break; case PEACHPUFF: c = cv::Vec3d(1, 0.855, 0.725); break; case PERU: c = cv::Vec3d(0.804, 0.522, 0.247); break; case PINK: c = cv::Vec3d(1, 0.753, 0.796); break; case PLUM: c = cv::Vec3d(0.867, 0.627, 0.867); break; case POWDERBLUE: c = cv::Vec3d(0.69, 0.878, 0.902); break; case PURPLE: c = cv::Vec3d(0.502, 0, 0.502); break; case RED: c = cv::Vec3d(1, 0, 0); break; case ROSYBROWN: c = cv::Vec3d(0.737, 0.561, 0.561); break; case ROYALBLUE: c = cv::Vec3d(0.255, 0.412, 0.882); break; case SADDLEBROWN: c = cv::Vec3d(0.545, 0.271, 0.0745); break; case SALMON: c = cv::Vec3d(0.98, 0.502, 0.447); break; case SANDYBROWN: c = cv::Vec3d(0.98, 0.643, 0.376); break; case SEAGREEN: c = cv::Vec3d(0.18, 0.545, 0.341); break; case SEASHELL: c = cv::Vec3d(1, 0.961, 0.933); break; case SIENNA: c = cv::Vec3d(0.627, 0.322, 0.176); break; case SILVER: c = cv::Vec3d(0.753, 0.753, 0.753); break; case SKYBLUE: c = cv::Vec3d(0.529, 0.808, 0.922); break; case SLATEBLUE: c = cv::Vec3d(0.416, 0.353, 0.804); break; case SLATEGRAY: c = cv::Vec3d(0.439, 0.502, 0.565); break; case SLATEGREY: c = cv::Vec3d(0.439, 0.502, 0.565); break; case SNOW: c = cv::Vec3d(1, 0.98, 0.98); break; case SPRINGGREEN: c = cv::Vec3d(0, 1, 0.498); break; case STEELBLUE: c = cv::Vec3d(0.275, 0.51, 0.706); break; case TAN: c = cv::Vec3d(0.824, 0.706, 0.549); break; case TEAL: c = cv::Vec3d(0, 0.502, 0.502); break; case THISTLE: c = cv::Vec3d(0.847, 0.749, 0.847); break; case TOMATO: c = cv::Vec3d(1, 0.388, 0.278); break; case TURQUOISE: c = cv::Vec3d(0.251, 0.878, 0.816); break; case VIOLET: c = cv::Vec3d(0.933, 0.51, 0.933); break; case WHEAT: c = cv::Vec3d(0.961, 0.871, 0.702); break; case WHITE: c = cv::Vec3d(1, 1, 1); break; case WHITESMOKE: c = cv::Vec3d(0.961, 0.961, 0.961); break; case YELLOW: c = cv::Vec3d(1, 1, 0); break; case YELLOWGREEN: c = cv::Vec3d(0.604, 0.804, 0.196); break; } // switch return c; } } // namespace rgb_colors } // namespace cv_bridge vision_opencv-1.12.3/cv_bridge/test/000077500000000000000000000000001302104371500173565ustar00rootroot00000000000000vision_opencv-1.12.3/cv_bridge/test/CMakeLists.txt000066400000000000000000000006201302104371500221140ustar00rootroot00000000000000# add the tests # add boost directories for now include_directories("../src") catkin_add_gtest(${PROJECT_NAME}-utest test_endian.cpp utest.cpp utest2.cpp test_rgb_colors.cpp) target_link_libraries(${PROJECT_NAME}-utest ${PROJECT_NAME} ${OpenCV_LIBRARIES} ${catkin_LIBRARIES} ) catkin_add_nosetests(enumerants.py) catkin_add_nosetests(conversions.py) catkin_add_nosetests(python_bindings.py) vision_opencv-1.12.3/cv_bridge/test/conversions.py000066400000000000000000000072471302104371500223120ustar00rootroot00000000000000#!/usr/bin/env python import rostest import unittest import numpy as np import sensor_msgs.msg from cv_bridge import CvBridge, CvBridgeError class TestConversions(unittest.TestCase): def test_mono16_cv2(self): import numpy as np br = CvBridge() im = np.uint8(np.random.randint(0, 255, size=(480, 640, 3))) self.assertRaises(CvBridgeError, lambda: br.imgmsg_to_cv2(br.cv2_to_imgmsg(im), "mono16")) br.imgmsg_to_cv2(br.cv2_to_imgmsg(im,"rgb8"), "mono16") def test_encode_decode_cv2(self): import cv2 import numpy as np fmts 
= [cv2.CV_8U, cv2.CV_8S, cv2.CV_16U, cv2.CV_16S, cv2.CV_32S, cv2.CV_32F, cv2.CV_64F]
        cvb_en = CvBridge()
        cvb_de = CvBridge()

        for w in range(100, 800, 100):
            for h in range(100, 800, 100):
                for f in fmts:
                    for channels in ([], 1, 2, 3, 4, 5):
                        if channels == []:
                            original = np.uint8(np.random.randint(0, 255, size=(h, w)))
                        else:
                            original = np.uint8(np.random.randint(0, 255, size=(h, w, channels)))
                        rosmsg = cvb_en.cv2_to_imgmsg(original)
                        newimg = cvb_de.imgmsg_to_cv2(rosmsg)
                        self.assert_(original.dtype == newimg.dtype)
                        if channels == 1:
                            # in that case, a gray image has a shape of size 2
                            self.assert_(original.shape[:2] == newimg.shape[:2])
                        else:
                            self.assert_(original.shape == newimg.shape)
                        self.assert_(len(original.tostring()) == len(newimg.tostring()))

    def test_encode_decode_cv2_compressed(self):
        import numpy as np
        # from: http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#Mat imread(const string& filename, int flags)
        formats = ["jpg", "jpeg", "jpe", "png", "bmp", "dib", "ppm", "pgm",
                   "pbm", "jp2", "sr", "ras", "tif", "tiff"]  # formats that rviz does not support
        cvb_en = CvBridge()
        cvb_de = CvBridge()

        for w in range(100, 800, 100):
            for h in range(100, 800, 100):
                for f in formats:
                    for channels in ([], 1, 3):
                        if channels == []:
                            original = np.uint8(np.random.randint(0, 255, size=(h, w)))
                        else:
                            original = np.uint8(np.random.randint(0, 255, size=(h, w, channels)))
                        compress_rosmsg = cvb_en.cv2_to_compressed_imgmsg(original, f)
                        newimg = cvb_de.compressed_imgmsg_to_cv2(compress_rosmsg)
                        self.assert_(original.dtype == newimg.dtype)
                        if channels == 1:
                            # in that case, a gray image has a shape of size 2
                            self.assert_(original.shape[:2] == newimg.shape[:2])
                        else:
                            self.assert_(original.shape == newimg.shape)
                        self.assert_(len(original.tostring()) == len(newimg.tostring()))

    def test_endianness(self):
        br = CvBridge()
        dtype = np.dtype('int32')
        # Set to big endian.
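        # numpy's '>' byte-order flag marks the array big-endian regardless of the
        # host CPU; cv2_to_imgmsg must then set msg.is_bigendian and serialize the
        # bytes unchanged, so the round trip below has to compare equal.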
dtype = dtype.newbyteorder('>') img = np.random.randint(0, 255, size=(30, 40)) msg = br.cv2_to_imgmsg(img.astype(dtype)) self.assert_(msg.is_bigendian == True) self.assert_((br.imgmsg_to_cv2(msg) == img).all()) if __name__ == '__main__': rosunit.unitrun('opencv_tests', 'conversions', TestConversions) vision_opencv-1.12.3/cv_bridge/test/enumerants.py000066400000000000000000000026761302104371500221120ustar00rootroot00000000000000#!/usr/bin/env python import rosunit import unittest import numpy as np import cv2 import sensor_msgs.msg from cv_bridge import CvBridge, CvBridgeError, getCvType class TestEnumerants(unittest.TestCase): def test_enumerants_cv2(self): img_msg = sensor_msgs.msg.Image() img_msg.width = 640 img_msg.height = 480 img_msg.encoding = "rgba8" img_msg.step = 640*4 img_msg.data = (640 * 480) * "1234" bridge_ = CvBridge() cvim = bridge_.imgmsg_to_cv2(img_msg, "rgb8") import sys self.assertEqual(sys.getrefcount(cvim), 2) # A 3 channel image cannot be sent as an rgba8 self.assertRaises(CvBridgeError, lambda: bridge_.cv2_to_imgmsg(cvim, "rgba8")) # but it can be sent as rgb8 and bgr8 bridge_.cv2_to_imgmsg(cvim, "rgb8") bridge_.cv2_to_imgmsg(cvim, "bgr8") self.assertEqual(getCvType("32FC4"), cv2.CV_32FC4) self.assertEqual(getCvType("8UC1"), cv2.CV_8UC1) self.assertEqual(getCvType("8U"), cv2.CV_8UC1) def test_numpy_types(self): import cv2 import numpy as np bridge_ = CvBridge() self.assertRaises(TypeError, lambda: bridge_.cv2_to_imgmsg(1, "rgba8")) if hasattr(cv2, 'cv'): self.assertRaises(TypeError, lambda: bridge_.cv2_to_imgmsg(cv2.cv(), "rgba8")) if __name__ == '__main__': rosunit.unitrun('opencv_tests', 'enumerants', TestEnumerants) vision_opencv-1.12.3/cv_bridge/test/python_bindings.py000066400000000000000000000015671302104371500231350ustar00rootroot00000000000000from nose.tools import assert_equal import numpy as np import cv_bridge def test_cvtColorForDisplay(): # convert label image to display label = np.zeros((480, 640), dtype=np.int32) height, width = label.shape[:2] label_value = 0 grid_num_y, grid_num_x = 3, 4 for grid_row in xrange(grid_num_y): grid_size_y = height / grid_num_y min_y = grid_size_y * grid_row max_y = min_y + grid_size_y for grid_col in xrange(grid_num_x): grid_size_x = width / grid_num_x min_x = grid_size_x * grid_col max_x = min_x + grid_size_x label[min_y:max_y, min_x:max_x] = label_value label_value += 1 label_viz = cv_bridge.cvtColorForDisplay(label, '32SC1', 'bgr8') assert_equal(label_viz.dtype, np.uint8) assert_equal(label_viz.min(), 0) assert_equal(label_viz.max(), 255) vision_opencv-1.12.3/cv_bridge/test/test_endian.cpp000066400000000000000000000022551302104371500223630ustar00rootroot00000000000000#include "boost/endian/conversion.hpp" #include <boost/make_shared.hpp> #include <cv_bridge/cv_bridge.h> #include <gtest/gtest.h> TEST(CvBridgeTest, endianness) { using namespace boost::endian; // Create an image of the type opposite to the platform sensor_msgs::Image msg; msg.height = 1; msg.width = 1; msg.encoding = "32SC2"; msg.step = 8; msg.data.resize(msg.step); int32_t* data = reinterpret_cast<int32_t*>(&msg.data[0]); // Write 1 and 2 in order, but with an endianness opposite to the platform if (order::native == order::little) { msg.is_bigendian = true; *(data++) = native_to_big(static_cast<int32_t>(1)); *data = native_to_big(static_cast<int32_t>(2)); } else { msg.is_bigendian = false; *(data++) = native_to_little(static_cast<int32_t>(1)); *data = native_to_little(static_cast<int32_t>(2)); } // Make sure the values are still the same cv_bridge::CvImageConstPtr img = cv_bridge::toCvShare(boost::make_shared<sensor_msgs::Image>(msg)); EXPECT_EQ(img->image.at<cv::Vec2i>(0, 0)[0],
1); EXPECT_EQ(img->image.at<cv::Vec2i>(0, 0)[1], 2); // Make sure we cannot share data EXPECT_NE(img->image.data, &msg.data[0]); } vision_opencv-1.12.3/cv_bridge/test/test_rgb_colors.cpp000066400000000000000000000007431302104371500232600ustar00rootroot00000000000000#include "cv_bridge/rgb_colors.h" #include <opencv2/opencv.hpp> #include <gtest/gtest.h> TEST(RGBColors, testGetRGBColor) { cv::Vec3d color; // red color = cv_bridge::rgb_colors::getRGBColor(cv_bridge::rgb_colors::RED); EXPECT_EQ(1, color[0]); EXPECT_EQ(0, color[1]); EXPECT_EQ(0, color[2]); // gray color = cv_bridge::rgb_colors::getRGBColor(cv_bridge::rgb_colors::GRAY); EXPECT_EQ(0.502, color[0]); EXPECT_EQ(0.502, color[1]); EXPECT_EQ(0.502, color[2]); } vision_opencv-1.12.3/cv_bridge/test/utest.cpp000066400000000000000000000103431302104371500212270ustar00rootroot00000000000000#include "cv_bridge/cv_bridge.h" #include <sensor_msgs/image_encodings.h> #include <gtest/gtest.h> // Tests conversion of non-continuous cv::Mat. #5206 TEST(CvBridgeTest, NonContinuous) { cv::Mat full = cv::Mat::eye(8, 8, CV_16U); cv::Mat partial = full.colRange(2, 5); cv_bridge::CvImage cvi; cvi.encoding = sensor_msgs::image_encodings::MONO16; cvi.image = partial; sensor_msgs::ImagePtr msg = cvi.toImageMsg(); EXPECT_EQ(msg->height, 8); EXPECT_EQ(msg->width, 3); EXPECT_EQ(msg->encoding, cvi.encoding); EXPECT_EQ(msg->step, 6); } TEST(CvBridgeTest, ChannelOrder) { cv::Mat_<uint16_t> mat(200, 200); mat.setTo(cv::Scalar(1000,0,0,0)); sensor_msgs::ImagePtr image(new sensor_msgs::Image()); image = cv_bridge::CvImage(image->header, sensor_msgs::image_encodings::MONO16, mat).toImageMsg(); cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(image); cv_bridge::CvImagePtr res = cv_bridge::cvtColor(cv_ptr, sensor_msgs::image_encodings::BGR8); EXPECT_EQ(res->encoding, sensor_msgs::image_encodings::BGR8); EXPECT_EQ(res->image.type(), cv_bridge::getCvType(res->encoding)); EXPECT_EQ(res->image.channels(), sensor_msgs::image_encodings::numChannels(res->encoding)); EXPECT_EQ(res->image.depth(), CV_8U); // The matrix should be the following cv::Mat_<cv::Vec3b> gt(200, 200); gt.setTo(cv::Scalar(1, 1, 1)*1000.*255./65535.); ASSERT_EQ(res->image.type(), gt.type()); EXPECT_EQ(cv::norm(res->image, gt, cv::NORM_INF), 0); } TEST(CvBridgeTest, initialization) { sensor_msgs::Image image; cv_bridge::CvImagePtr cv_ptr; image.encoding = "bgr8"; image.height = 200; image.width = 200; try { cv_ptr = cv_bridge::toCvCopy(image, "mono8"); // Before the fix, it would never get here, as it would segfault EXPECT_EQ(1, 0); } catch (cv_bridge::Exception& e) { EXPECT_EQ(1, 1); } // Check some normal images with different ratios for(int height = 100; height <= 300; ++height) { image.height = height; image.encoding = sensor_msgs::image_encodings::RGB8; image.step = image.width*3; image.data.resize(image.height*image.step); cv_ptr = cv_bridge::toCvCopy(image, "mono8"); } } TEST(CvBridgeTest, imageMessageStep) { // Test 1: image step is padded sensor_msgs::Image image; cv_bridge::CvImagePtr cv_ptr; image.encoding = "mono8"; image.height = 220; image.width = 200; image.is_bigendian = false; image.step = 208; image.data.resize(image.height*image.step); ASSERT_NO_THROW(cv_ptr = cv_bridge::toCvCopy(image, "mono8")); ASSERT_EQ(220, cv_ptr->image.rows); ASSERT_EQ(200, cv_ptr->image.cols); //OpenCV copyTo argument removes the stride ASSERT_EQ(200, cv_ptr->image.step[0]); //Test 2: image step is invalid image.step = 199; ASSERT_THROW(cv_ptr = cv_bridge::toCvCopy(image, "mono8"), cv_bridge::Exception); //Test 3: image step == image.width * element size * number of channels image.step = 200; image.data.resize(image.height*image.step);
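// With a tightly packed step (step == width for mono8), the conversion should
// succeed and the stride should carry over unchanged, as asserted below.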
ASSERT_NO_THROW(cv_ptr = cv_bridge::toCvCopy(image, "mono8")); ASSERT_EQ(220, cv_ptr->image.rows); ASSERT_EQ(200, cv_ptr->image.cols); ASSERT_EQ(200, cv_ptr->image.step[0]); } TEST(CvBridgeTest, imageMessageConversion) { sensor_msgs::Image imgmsg; cv_bridge::CvImagePtr cv_ptr; imgmsg.height = 220; imgmsg.width = 200; imgmsg.is_bigendian = false; // image with data type float32 and 1 channels imgmsg.encoding = "32FC1"; imgmsg.step = imgmsg.width * 32 / 8 * 1; imgmsg.data.resize(imgmsg.height * imgmsg.step); ASSERT_NO_THROW(cv_ptr = cv_bridge::toCvCopy(imgmsg, imgmsg.encoding)); ASSERT_EQ(imgmsg.height, cv_ptr->image.rows); ASSERT_EQ(imgmsg.width, cv_ptr->image.cols); ASSERT_EQ(1, cv_ptr->image.channels()); ASSERT_EQ(imgmsg.step, cv_ptr->image.step[0]); // image with data type float32 and 10 channels imgmsg.encoding = "32FC10"; imgmsg.step = imgmsg.width * 32 / 8 * 10; imgmsg.data.resize(imgmsg.height * imgmsg.step); ASSERT_NO_THROW(cv_ptr = cv_bridge::toCvCopy(imgmsg, imgmsg.encoding)); ASSERT_EQ(imgmsg.height, cv_ptr->image.rows); ASSERT_EQ(imgmsg.width, cv_ptr->image.cols); ASSERT_EQ(10, cv_ptr->image.channels()); ASSERT_EQ(imgmsg.step, cv_ptr->image.step[0]); } int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } vision_opencv-1.12.3/cv_bridge/test/utest2.cpp000066400000000000000000000172101302104371500213110ustar00rootroot00000000000000/********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2009, Willow Garage, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Willow Garage nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. 
*********************************************************************/ #include <string> #include <vector> #include <gtest/gtest.h> #include "opencv2/core/core.hpp" #include "cv_bridge/cv_bridge.h" #include <sensor_msgs/Image.h> #include <sensor_msgs/image_encodings.h> using namespace sensor_msgs::image_encodings; bool isUnsigned(const std::string & encoding) { return encoding == RGB8 || encoding == RGBA8 || encoding == RGB16 || encoding == RGBA16 || encoding == BGR8 || encoding == BGRA8 || encoding == BGR16 || encoding == BGRA16 || encoding == MONO8 || encoding == MONO16 || encoding == TYPE_8UC1 || encoding == TYPE_8UC2 || encoding == TYPE_8UC3 || encoding == TYPE_8UC4 || encoding == TYPE_16UC1 || encoding == TYPE_16UC2 || encoding == TYPE_16UC3 || encoding == TYPE_16UC4; //BAYER_RGGB8, BAYER_BGGR8, BAYER_GBRG8, BAYER_GRBG8, BAYER_RGGB16, BAYER_BGGR16, BAYER_GBRG16, BAYER_GRBG16, //YUV422 } std::vector<std::string> getEncodings() { // TODO for Groovy, the following types should be uncommented std::string encodings[] = { RGB8, RGBA8, RGB16, RGBA16, BGR8, BGRA8, BGR16, BGRA16, MONO8, MONO16, TYPE_8UC1, /*TYPE_8UC2,*/ TYPE_8UC3, TYPE_8UC4, TYPE_8SC1, /*TYPE_8SC2,*/ TYPE_8SC3, TYPE_8SC4, TYPE_16UC1, /*TYPE_16UC2,*/ TYPE_16UC3, TYPE_16UC4, TYPE_16SC1, /*TYPE_16SC2,*/ TYPE_16SC3, TYPE_16SC4, TYPE_32SC1, /*TYPE_32SC2,*/ TYPE_32SC3, TYPE_32SC4, TYPE_32FC1, /*TYPE_32FC2,*/ TYPE_32FC3, TYPE_32FC4, TYPE_64FC1, /*TYPE_64FC2,*/ TYPE_64FC3, TYPE_64FC4, //BAYER_RGGB8, BAYER_BGGR8, BAYER_GBRG8, BAYER_GRBG8, BAYER_RGGB16, BAYER_BGGR16, BAYER_GBRG16, BAYER_GRBG16, YUV422 }; return std::vector<std::string>(encodings, encodings+47-8-7); } TEST(OpencvTests, testCase_encode_decode) { std::vector<std::string> encodings = getEncodings(); for(size_t i=0; iimage, cv_bridge::Exception); continue; } // We do not support conversion to YUV422 for now, except from YUV422 if ((dst_encoding == YUV422) && (src_encoding != YUV422)) { EXPECT_THROW(cv_bridge::toCvShare(image_msg, dst_encoding), cv_bridge::Exception); continue; } cv_image = cv_bridge::toCvShare(image_msg, dst_encoding); // We do not support conversion to YUV422 for now, except from YUV422 if ((src_encoding == YUV422) && (dst_encoding != YUV422)) { EXPECT_THROW(cvtColor(cv_image, src_encoding)->image, cv_bridge::Exception); continue; } } // And convert back to a cv::Mat image_back = cvtColor(cv_image, src_encoding)->image; // If the number of channels is different, some information got lost at some point, so no possible test if (!is_num_channels_the_same) continue; if (bitDepth(src_encoding) >= 32) { // In the case where the input has floats, we will lose precision but no more than 1 EXPECT_LT(cv::norm(image_original, image_back, cv::NORM_INF), 1) << "problem converting from " << src_encoding << " to " << dst_encoding << " and back."; } else if ((bitDepth(src_encoding) == 16) && (bitDepth(dst_encoding) == 8)) { // In the case where the input has floats, we will lose precision but no more than 1 * max(127) EXPECT_LT(cv::norm(image_original, image_back, cv::NORM_INF), 128) << "problem converting from " << src_encoding << " to " << dst_encoding << " and back."; } else { EXPECT_EQ(cv::norm(image_original, image_back, cv::NORM_INF), 0) << "problem converting from " << src_encoding << " to " << dst_encoding << " and back."; } } } } vision_opencv-1.12.3/image_geometry/000077500000000000000000000000001302104371500174505ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/CHANGELOG.rst000066400000000000000000000302341302104371500214730ustar00rootroot00000000000000^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Changelog for package image_geometry
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1.12.3 (2016-12-04) ------------------- 1.12.2 (2016-09-24) ------------------- * Fix "stdlib.h: No such file or directory" errors in GCC-6 Including '-isystem /usr/include' breaks building with GCC-6. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129 * Merge pull request `#142 `_ from YuOhara/remap_with_nan_border_value remap with nan border if mat value is float or double * remap with nan border if mat value is float or double * Contributors: Hodorgasm, Vincent Rabaud, YuOhara 1.12.1 (2016-07-11) ------------------- * Add fullResolution getter to PinholeCameraModel * add a missing dependency when building the doc * fix sphinx doc path * Contributors: Jacob Panikulam, Vincent Rabaud 1.12.0 (2016-03-18) ------------------- * depend on OpenCV3 only * Contributors: Vincent Rabaud 1.11.12 (2016-03-10) -------------------- * issue `#117 `_ pull request `#118 `_ check all distortion coefficients to see if rectification ought to be done * Contributors: Lucas Walter 1.11.11 (2016-01-31) -------------------- * clean up the doc files * fix a few warnings in doc jobs * Contributors: Vincent Rabaud 1.11.10 (2016-01-16) -------------------- 1.11.9 (2015-11-29) ------------------- * add a condition if D=None * fix compilation warnings * Contributors: Vincent Rabaud, YuOhara 1.11.8 (2015-07-15) ------------------- * fixes `#62 `_, bug in Python rectifyPoint old opencv1 API * Simplify some OpenCV3 distinction * Contributors: Basheer Subei, Vincent Rabaud 1.11.7 (2014-12-14) ------------------- * Merge pull request `#53 `_ from carnegieroboticsllc/patch-1 Update stereo_camera_model.cpp * Updated inline math for reprojecting a single disparity * Update stereo_camera_model.cpp Correct slight error in the Q matrix derivation * Updated Q matrix to account for cameras with different Fx and Fy values * Contributors: Carnegie Robotics LLC, Matt Alvarado, Vincent Rabaud 1.11.6 (2014-11-16) ------------------- * Fixes in image_geometry for Python cv2 API * Fixed typo: np -> numpy * Added missing tfFrame method to Python PinholeCameraModel. * Removed trailing whitespace. * Contributors: Daniel Maturana 1.11.5 (2014-09-21) ------------------- * get code to work with OpenCV3 actually fixes `#46 `_ properly * Contributors: Vincent Rabaud 1.11.4 (2014-07-27) ------------------- 1.11.3 (2014-06-08) ------------------- * pinhole_camera_model: fix implicit shared_ptr cast to bool for C++11 In C++11 boost::shared_ptr does not provide the implicit bool conversion operator anymore, so make the cast in pinhole_camera_model.h explicit. That does not hurt in older C++ standards and makes compilation with C++11 possible. * Contributors: Max Schwarz 1.11.2 (2014-04-28) ------------------- 1.11.1 (2014-04-16) ------------------- 1.11.0 (2014-02-15) ------------------- * remove OpenCV 1 API * fixes `#6 `_ * fix OpenCV dependencies * Contributors: Vincent Rabaud 1.10.15 (2014-02-07) -------------------- * add assignment operator for StereoCameraModel * fixed Python rectifyImage implementation in PinholeCameraModel * Added operator= for the PinholeCameraModel. Added the operator= for the PinholeCameraModel. I am not sure if the PinholeCameraModel needs to have a destructor, too. To follow the 'rule of three' it should actually have one. 
* Contributors: Tobias Bär, Valsamis Ntouskos, Vincent Rabaud 1.10.14 (2013-11-23 16:17) -------------------------- * Contributors: Vincent Rabaud 1.10.13 (2013-11-23 09:19) -------------------------- * Contributors: Vincent Rabaud 1.10.12 (2013-11-22) -------------------- * "1.10.12" * Contributors: Vincent Rabaud 1.10.11 (2013-10-23) -------------------- * Contributors: Vincent Rabaud 1.10.10 (2013-10-19) -------------------- * Contributors: Vincent Rabaud 1.10.9 (2013-10-07) ------------------- * fixes `#23 `_ * Contributors: Vincent Rabaud 1.10.8 (2013-09-09) ------------------- * check for CATKIN_ENABLE_TESTING * update email address * Contributors: Lukas Bulwahn, Vincent Rabaud 1.10.7 (2013-07-17) ------------------- 1.10.6 (2013-03-01) ------------------- 1.10.5 (2013-02-11) ------------------- * Add dependency on generated messages Catkin requires explicit enumeration of dependencies on generated messages. Add this declaration to properly flatten the dependency graph and force Catkin to generate geometry_msgs before compiling image_geometry. * Contributors: Adam Hachey 1.10.4 (2013-02-02) ------------------- 1.10.3 (2013-01-17) ------------------- 1.10.2 (2013-01-13) ------------------- * fix ticket 4253 * Contributors: Vincent Rabaud 1.10.1 (2013-01-10) ------------------- 1.10.0 (2013-01-03) ------------------- 1.9.15 (2013-01-02) ------------------- 1.9.14 (2012-12-30) ------------------- * add feature for https://code.ros.org/trac/ros-pkg/ticket/5592 * CMake cleanups * fix a failing test * Contributors: Vincent Rabaud 1.9.13 (2012-12-15) ------------------- * use the catkin macros for the setup.py * Contributors: Vincent Rabaud 1.9.12 (2012-12-14) ------------------- * buildtool_depend catkin fix * Contributors: William Woodall 1.9.11 (2012-12-10) ------------------- * Fixing image_geometry package.xml * fix https://code.ros.org/trac/ros-pkg/ticket/5570 * Contributors: Vincent Rabaud, William Woodall 1.9.10 (2012-10-04) ------------------- 1.9.9 (2012-10-01) ------------------ * fix dependencies * Contributors: Vincent Rabaud 1.9.8 (2012-09-30) ------------------ * fix some dependencies * fix missing Python at install and fix some dependencies * Contributors: Vincent Rabaud 1.9.7 (2012-09-28 21:07) ------------------------ * add missing stuff * make sure we find catkin * Contributors: Vincent Rabaud 1.9.6 (2012-09-28 15:17) ------------------------ * make all the tests pass * comply to the new Catkin API * Contributors: Vincent Rabaud 1.9.5 (2012-09-15) ------------------ * remove dependencies to the opencv2 ROS package * Contributors: Vincent Rabaud 1.9.4 (2012-09-13) ------------------ * make sure the include folders are copied to the right place * Contributors: Vincent Rabaud 1.9.3 (2012-09-12) ------------------ 1.9.2 (2012-09-07) ------------------ * be more compliant to the latest catkin * added catkin_project() to cv_bridge, image_geometry, and opencv_tests * Contributors: Jonathan Binney, Vincent Rabaud 1.9.1 (2012-08-28 22:06) ------------------------ * remove things that were marked as ROS_DEPRECATED * Contributors: Vincent Rabaud 1.9.0 (2012-08-28 14:29) ------------------------ * catkinized opencv_tests by Jon Binney * fix ticket 5449 * use OpenCV's projectPoints * remove the version check, let's trust OpenCV :) * revert the removal of opencv2 * vision_opencv: Export OpenCV flags in manifests for image_geometry, cv_bridge. 
* finally get rid of opencv2 as it is a system dependency now * bump REQUIRED version of OpenCV to 2.3.2, which is what's in ros-fuerte-opencv * switch rosdep name to opencv2, to refer to ros-fuerte-opencv2 * Adding a few missing headers so that client code may compile against pinhole camera model. * Adding opencv2 to all manifests, so that client packages may not break when using them. * baking in opencv debs and attempting a pre-release * image_geometry: (Python) Adjust K and P for ROI/binning. Also expose full resolution K and P. Add raw_roi property. * image_geometry: Add Tx, Ty getters (Python). * image_geometry: Added tf_frame and stamp properties. Only generate undistort maps when rectifyImage is called. * image_geometry: Fix for when D is empty (Python). * image_geometry: Take all D coefficients, not just the first 4 (Python). * image_geometry: Fix rectification in the presence of binning (`#4848 `_). * image_geometry: Fixed wg-ros-pkg `#5019 `_, error updating StereoCameraModel. Removed manifest dependency on cv_bridge. * image_geometry: fromCameraInfo() returns bool, true if parameters have changed since last call. * image_geometry: Accessors for full-res K, P. * image_geometry: Implemented toFullResolution(), toReducedResolution(). * image_geometry: Implemented reducedResolution(). * image_geometry: Implemented rectifiedRoi() with caching. Fixed bug that would cause rectification maps to be regenerated every time. * image_geometry: Implemented rectifyRoi(). * image_geometry: Overloads of projection functions that return the output directly instead of through a reference parameter. Implemented unrectifyRoi(). Added fullResolution(), rawRoi(). * image_geometry: Library-specific exception class. * image_geometry: PIMPL pattern for cached data, so I can change in patch releases if necessary. Changed projectPixelTo3dRay() to normalize to z=1. * image_geometry (rep0104): Added binning. Partially fixed ROI (not finding rectified ROI yet). Now interpreting distortion_model. Lots of code cleanup. * image_geometry (rep0104): Got tests passing again, were issues with D becoming variable-length. * image_geometry: Fixed swapped width/height in computing ROI undistort maps. Partially fixes `#4206 `_. * image_geometry: getDelta functions, getZ and getDisparity in C++ and Python. Docs and tests for them. Changed Python fx() and friends to pull values out of P instead of K. * image_geometry: Added C++ getDeltaU and getDeltaV. * `#4201 `_, implement/doc/test for getDeltaU getDeltaX getDeltaV getDeltaY * Added Ubuntu platform tags to manifest * `#4083 `_, projectPixelTo3dRay implemented * image_geometry: Added PinholeCameraModel::stamp() returning the time stamp. * image_geometry: Fixed bugs related to ignoring Tx & Ty in projectPixelTo3dRay and unrectifyPoint. Added Tx() and Ty() accessors. * image_geometry: Fixed `#4063 `_, PinholeCameraModel ignores Tx term in P matrix. * image_geometry: Implemented projectDisparityTo3d, `#4019 `_. * image_geometry: Implemented unrectifyPoint, with unit tests. * image_geometry: Fixed bug in rectifyPoint due to cv::undistortPoints not accepting double pt data, `#4053 `_. * image_geometry: Tweaked manifest. * image_geometry: Better manifest description. * Removed tfFrame sample * image_geometry: Doxygen main page, manifest updates. * image_geometry: Doxygen for StereoCameraModel. * image_geometry: Made Q calculation match old stereoproc one. * image_geometry: Tweaked projectDisparityImageTo3D API for handling missing values. 
* image_geometry: Added method to project disparity image to 3d. Added ConstPtr version of fromCameraInfo in StereoCameraModel. * image_geometry: Export linker flags. Fixed bug that could cause rectification maps to not be initialized before use. * Fixed path-handling on gtest for CMake 2.6 * image_geometry: Added missing source file. * image_geometry: Added some C++ docs. * image_geometry: Minor cleanup of StereoCameraModel, added it to build. Put in copy constructors. * image_geometry: Switched pinhole_camera_model to use new C++ OpenCV types and functions. * Remove use of deprecated rosbuild macros * image_geometry (C++): Unit test for projecting points uv <-> xyz. * image_geometry (C++): Implemented more projection functions, added beginnings of the unit tests. * trigger rebuild * Enable rosdoc.yaml * docs * image_geometry: Started C++ API. PinholeCameraModel is in theory (untested) able to track state efficiently and rectify images. * First stereo test * Checkpoint * Skeleton of test * First cut * Contributors: Vincent Rabaud, ethanrublee, gerkey, jamesb, mihelich, vrabaud, wheeler vision_opencv-1.12.3/image_geometry/CMakeLists.txt000066400000000000000000000020361302104371500222110ustar00rootroot00000000000000cmake_minimum_required(VERSION 2.8) project(image_geometry) find_package(catkin REQUIRED sensor_msgs) find_package(OpenCV REQUIRED) catkin_package(CATKIN_DEPENDS sensor_msgs DEPENDS OpenCV INCLUDE_DIRS include LIBRARIES ${PROJECT_NAME} ) catkin_python_setup() include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ) include_directories(include) # add a library add_library(${PROJECT_NAME} src/pinhole_camera_model.cpp src/stereo_camera_model.cpp) target_link_libraries(${PROJECT_NAME} ${OpenCV_LIBRARIES}) add_dependencies(${PROJECT_NAME} ${catkin_EXPORTED_TARGETS}) install(DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}/ ) install(TARGETS ${PROJECT_NAME} RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} ) # add tests if(CATKIN_ENABLE_TESTING) add_subdirectory(test) endif() vision_opencv-1.12.3/image_geometry/doc/000077500000000000000000000000001302104371500202155ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/doc/conf.py000066400000000000000000000150231302104371500215150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # image_geometry documentation build configuration file, created by # sphinx-quickstart on Mon Jun 1 14:21:53 2009. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.pngmath'] # Add any paths that contain templates here, relative to this directory. 
templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'image_geometry' copyright = u'2009, Willow Garage, Inc.' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.1' # The full version, including alpha/beta/rc tags. release = '0.1.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # "<project> v<release> documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names.
#html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a <link> tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'image_geometrydoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'image_geometry.tex', u'stereo\\_utils Documentation', u'James Bowman', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = { 'http://docs.python.org/': None, 'http://docs.scipy.org/doc/numpy' : None, 'http://www.ros.org/doc/api/tf/html/python/' : None } vision_opencv-1.12.3/image_geometry/doc/index.rst000066400000000000000000000010501302104371500220520ustar00rootroot00000000000000image_geometry ============== image_geometry simplifies interpreting images geometrically using the parameters from sensor_msgs/CameraInfo. .. module:: image_geometry .. autoclass:: image_geometry.PinholeCameraModel :members: fromCameraInfo, rectifyImage, rectifyPoint, tfFrame, project3dToPixel, projectPixelTo3dRay, distortionCoeffs, intrinsicMatrix, projectionMatrix, rotationMatrix, cx, cy, fx, fy .. autoclass:: image_geometry.StereoCameraModel :members: Indices and tables ================== * :ref:`genindex` * :ref:`search` vision_opencv-1.12.3/image_geometry/doc/mainpage.dox000066400000000000000000000014231302104371500225120ustar00rootroot00000000000000/** \mainpage \htmlinclude manifest.html \b image_geometry contains camera model classes that simplify interpreting images geometrically using the calibration parameters from sensor_msgs/CameraInfo messages. They may be efficiently updated in your image callback: \code image_geometry::PinholeCameraModel model_; void imageCb(const sensor_msgs::ImageConstPtr& raw_image, const sensor_msgs::CameraInfoConstPtr& cam_info) { // Update the camera model (usually a no-op) model_.fromCameraInfo(cam_info); // Do processing... } \endcode \section codeapi Code API \b image_geometry contains two classes: - image_geometry::PinholeCameraModel - models a pinhole camera with distortion. - image_geometry::StereoCameraModel - models a stereo pair of pinhole cameras.
*/ vision_opencv-1.12.3/image_geometry/include/000077500000000000000000000000001302104371500210735ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/include/image_geometry/000077500000000000000000000000001302104371500240705ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/include/image_geometry/pinhole_camera_model.h000066400000000000000000000230761302104371500303770ustar00rootroot00000000000000#ifndef IMAGE_GEOMETRY_PINHOLE_CAMERA_MODEL_H #define IMAGE_GEOMETRY_PINHOLE_CAMERA_MODEL_H #include <sensor_msgs/CameraInfo.h> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/calib3d/calib3d.hpp> #include <stdexcept> namespace image_geometry { class Exception : public std::runtime_error { public: Exception(const std::string& description) : std::runtime_error(description) {} }; /** * \brief Simplifies interpreting images geometrically using the parameters from * sensor_msgs/CameraInfo. */ class PinholeCameraModel { public: PinholeCameraModel(); PinholeCameraModel(const PinholeCameraModel& other); PinholeCameraModel& operator=(const PinholeCameraModel& other); /** * \brief Set the camera parameters from the sensor_msgs/CameraInfo message. */ bool fromCameraInfo(const sensor_msgs::CameraInfo& msg); /** * \brief Set the camera parameters from the sensor_msgs/CameraInfo message. */ bool fromCameraInfo(const sensor_msgs::CameraInfoConstPtr& msg); /** * \brief Get the name of the camera coordinate frame in tf. */ std::string tfFrame() const; /** * \brief Get the time stamp associated with this camera model. */ ros::Time stamp() const; /** * \brief The resolution at which the camera was calibrated. * * The maximum resolution at which the camera can be used with the current * calibration; normally this is the same as the imager resolution. */ cv::Size fullResolution() const; /** * \brief The resolution of the current rectified image. * * The size of the rectified image associated with the latest CameraInfo, as * reduced by binning/ROI and affected by distortion. If binning and ROI are * not in use, this is the same as fullResolution(). */ cv::Size reducedResolution() const; cv::Point2d toFullResolution(const cv::Point2d& uv_reduced) const; cv::Rect toFullResolution(const cv::Rect& roi_reduced) const; cv::Point2d toReducedResolution(const cv::Point2d& uv_full) const; cv::Rect toReducedResolution(const cv::Rect& roi_full) const; /** * \brief The current raw ROI, as used for capture by the camera driver. */ cv::Rect rawRoi() const; /** * \brief The current rectified ROI, which best fits the raw ROI. */ cv::Rect rectifiedRoi() const; /** * \brief Project a 3d point to rectified pixel coordinates. * * This is the inverse of projectPixelTo3dRay(). * * \param xyz 3d point in the camera coordinate frame * \return (u,v) in rectified pixel coordinates */ cv::Point2d project3dToPixel(const cv::Point3d& xyz) const; /** * \brief Project a rectified pixel to a 3d ray. * * Returns the unit vector in the camera coordinate frame in the direction of rectified * pixel (u,v) in the image plane. This is the inverse of project3dToPixel(). * * In 1.4.x, the vector has z = 1.0. Previously, this function returned a unit vector. * * \param uv_rect Rectified pixel coordinates * \return 3d ray passing through (u,v) */ cv::Point3d projectPixelTo3dRay(const cv::Point2d& uv_rect) const; /** * \brief Rectify a raw camera image. */ void rectifyImage(const cv::Mat& raw, cv::Mat& rectified, int interpolation = cv::INTER_LINEAR) const; /** * \brief Apply camera distortion to a rectified image.
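* * This is the approximate inverse of rectifyImage(): the rectified image is remapped back into raw (distorted) pixel coordinates.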
*/ void unrectifyImage(const cv::Mat& rectified, cv::Mat& raw, int interpolation = cv::INTER_LINEAR) const; /** * \brief Compute the rectified image coordinates of a pixel in the raw image. */ cv::Point2d rectifyPoint(const cv::Point2d& uv_raw) const; /** * \brief Compute the raw image coordinates of a pixel in the rectified image. */ cv::Point2d unrectifyPoint(const cv::Point2d& uv_rect) const; /** * \brief Compute the rectified ROI best fitting a raw ROI. */ cv::Rect rectifyRoi(const cv::Rect& roi_raw) const; /** * \brief Compute the raw ROI best fitting a rectified ROI. */ cv::Rect unrectifyRoi(const cv::Rect& roi_rect) const; /** * \brief Returns the camera info message held internally */ const sensor_msgs::CameraInfo& cameraInfo() const; /** * \brief Returns the original camera matrix. */ const cv::Matx33d& intrinsicMatrix() const; /** * \brief Returns the distortion coefficients. */ const cv::Mat_<double>& distortionCoeffs() const; /** * \brief Returns the rotation matrix. */ const cv::Matx33d& rotationMatrix() const; /** * \brief Returns the projection matrix. */ const cv::Matx34d& projectionMatrix() const; /** * \brief Returns the original camera matrix for full resolution. */ const cv::Matx33d& fullIntrinsicMatrix() const; /** * \brief Returns the projection matrix for full resolution. */ const cv::Matx34d& fullProjectionMatrix() const; /** * \brief Returns the focal length (pixels) in x direction of the rectified image. */ double fx() const; /** * \brief Returns the focal length (pixels) in y direction of the rectified image. */ double fy() const; /** * \brief Returns the x coordinate of the optical center. */ double cx() const; /** * \brief Returns the y coordinate of the optical center. */ double cy() const; /** * \brief Returns the x-translation term of the projection matrix. */ double Tx() const; /** * \brief Returns the y-translation term of the projection matrix. */ double Ty() const; /** * \brief Returns the number of columns in each bin. */ uint32_t binningX() const; /** * \brief Returns the number of rows in each bin. */ uint32_t binningY() const; /** * \brief Compute delta u, given Z and delta X in Cartesian space. * * For given Z, this is the inverse of getDeltaX(). * * \param deltaX Delta X, in Cartesian space * \param Z Z (depth), in Cartesian space */ double getDeltaU(double deltaX, double Z) const; /** * \brief Compute delta v, given Z and delta Y in Cartesian space. * * For given Z, this is the inverse of getDeltaY(). * * \param deltaY Delta Y, in Cartesian space * \param Z Z (depth), in Cartesian space */ double getDeltaV(double deltaY, double Z) const; /** * \brief Compute delta X, given Z in Cartesian space and delta u in pixels. * * For given Z, this is the inverse of getDeltaU(). * * \param deltaU Delta u, in pixels * \param Z Z (depth), in Cartesian space */ double getDeltaX(double deltaU, double Z) const; /** * \brief Compute delta Y, given Z in Cartesian space and delta v in pixels. * * For given Z, this is the inverse of getDeltaV().
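* * For example, with fy = 500 pixels and Z = 2 m, a 50-pixel vertical span corresponds to getDeltaY(50, 2) = 2 * 50 / 500 = 0.2 m.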
* * \param deltaV Delta v, in pixels * \param Z Z (depth), in Cartesian space */ double getDeltaY(double deltaV, double Z) const; /** * \brief Returns true if the camera has been initialized */ bool initialized() const { return (bool)cache_; } protected: sensor_msgs::CameraInfo cam_info_; cv::Mat_<double> D_; // Unaffected by binning, ROI cv::Matx33d R_; // Unaffected by binning, ROI cv::Matx33d K_; // Describe current image (includes binning, ROI) cv::Matx34d P_; // Describe current image (includes binning, ROI) cv::Matx33d K_full_; // Describe full-res image, needed for full maps cv::Matx34d P_full_; // Describe full-res image, needed for full maps // Use PIMPL here so we can change internals in patch updates if needed struct Cache; boost::shared_ptr<Cache> cache_; // Holds cached data for internal use void initRectificationMaps() const; friend class StereoCameraModel; }; /* Trivial inline functions */ inline std::string PinholeCameraModel::tfFrame() const { assert( initialized() ); return cam_info_.header.frame_id; } inline ros::Time PinholeCameraModel::stamp() const { assert( initialized() ); return cam_info_.header.stamp; } inline const sensor_msgs::CameraInfo& PinholeCameraModel::cameraInfo() const { return cam_info_; } inline const cv::Matx33d& PinholeCameraModel::intrinsicMatrix() const { return K_; } inline const cv::Mat_<double>& PinholeCameraModel::distortionCoeffs() const { return D_; } inline const cv::Matx33d& PinholeCameraModel::rotationMatrix() const { return R_; } inline const cv::Matx34d& PinholeCameraModel::projectionMatrix() const { return P_; } inline const cv::Matx33d& PinholeCameraModel::fullIntrinsicMatrix() const { return K_full_; } inline const cv::Matx34d& PinholeCameraModel::fullProjectionMatrix() const { return P_full_; } inline double PinholeCameraModel::fx() const { return P_(0,0); } inline double PinholeCameraModel::fy() const { return P_(1,1); } inline double PinholeCameraModel::cx() const { return P_(0,2); } inline double PinholeCameraModel::cy() const { return P_(1,2); } inline double PinholeCameraModel::Tx() const { return P_(0,3); } inline double PinholeCameraModel::Ty() const { return P_(1,3); } inline uint32_t PinholeCameraModel::binningX() const { return cam_info_.binning_x; } inline uint32_t PinholeCameraModel::binningY() const { return cam_info_.binning_y; } inline double PinholeCameraModel::getDeltaU(double deltaX, double Z) const { assert( initialized() ); return fx() * deltaX / Z; } inline double PinholeCameraModel::getDeltaV(double deltaY, double Z) const { assert( initialized() ); return fy() * deltaY / Z; } inline double PinholeCameraModel::getDeltaX(double deltaU, double Z) const { assert( initialized() ); return Z * deltaU / fx(); } inline double PinholeCameraModel::getDeltaY(double deltaV, double Z) const { assert( initialized() ); return Z * deltaV / fy(); } } //namespace image_geometry #endif vision_opencv-1.12.3/image_geometry/include/image_geometry/stereo_camera_model.h000066400000000000000000000070441302104371500302370ustar00rootroot00000000000000#ifndef IMAGE_GEOMETRY_STEREO_CAMERA_MODEL_H #define IMAGE_GEOMETRY_STEREO_CAMERA_MODEL_H #include "image_geometry/pinhole_camera_model.h" namespace image_geometry { /** * \brief Simplifies interpreting stereo image pairs geometrically using the * parameters from the left and right sensor_msgs/CameraInfo.
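 *
 * A minimal usage sketch (the callback wiring and the example pixel/disparity
 * values are illustrative assumptions, not part of this API):
 * \code
 * image_geometry::StereoCameraModel model_;
 * void infoCb(const sensor_msgs::CameraInfoConstPtr& left_info,
 *             const sensor_msgs::CameraInfoConstPtr& right_info)
 * {
 *   model_.fromCameraInfo(left_info, right_info);  // cheap if unchanged
 *   cv::Point3d xyz;
 *   // Reproject pixel (320, 240) of the rectified left image, 16 px disparity
 *   model_.projectDisparityTo3d(cv::Point2d(320, 240), 16.0f, xyz);
 * }
 * \endcode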
*/ class StereoCameraModel { public: StereoCameraModel(); StereoCameraModel(const StereoCameraModel& other); StereoCameraModel& operator=(const StereoCameraModel& other); /** * \brief Set the camera parameters from the sensor_msgs/CameraInfo messages. */ bool fromCameraInfo(const sensor_msgs::CameraInfo& left, const sensor_msgs::CameraInfo& right); /** * \brief Set the camera parameters from the sensor_msgs/CameraInfo messages. */ bool fromCameraInfo(const sensor_msgs::CameraInfoConstPtr& left, const sensor_msgs::CameraInfoConstPtr& right); /** * \brief Get the left monocular camera model. */ const PinholeCameraModel& left() const; /** * \brief Get the right monocular camera model. */ const PinholeCameraModel& right() const; /** * \brief Get the name of the camera coordinate frame in tf. * * For stereo cameras, both the left and right CameraInfo should be in the left * optical frame. */ std::string tfFrame() const; /** * \brief Project a rectified pixel with disparity to a 3d point. */ void projectDisparityTo3d(const cv::Point2d& left_uv_rect, float disparity, cv::Point3d& xyz) const; /** * \brief Project a disparity image to a 3d point cloud. * * If handleMissingValues = true, all points with minimal disparity (outliers) have * Z set to MISSING_Z (currently 10000.0). */ void projectDisparityImageTo3d(const cv::Mat& disparity, cv::Mat& point_cloud, bool handleMissingValues = false) const; static const double MISSING_Z; /** * \brief Returns the disparity reprojection matrix. */ const cv::Matx44d& reprojectionMatrix() const; /** * \brief Returns the horizontal baseline in world coordinates. */ double baseline() const; /** * \brief Returns the depth at which a point is observed with a given disparity. * * This is the inverse of getDisparity(). */ double getZ(double disparity) const; /** * \brief Returns the disparity observed for a point at depth Z. * * This is the inverse of getZ(). */ double getDisparity(double Z) const; /** * \brief Returns true if the camera has been initialized */ bool initialized() const { return left_.initialized() && right_.initialized(); } protected: PinholeCameraModel left_, right_; cv::Matx44d Q_; void updateQ(); }; /* Trivial inline functions */ inline const PinholeCameraModel& StereoCameraModel::left() const { return left_; } inline const PinholeCameraModel& StereoCameraModel::right() const { return right_; } inline std::string StereoCameraModel::tfFrame() const { return left_.tfFrame(); } inline const cv::Matx44d& StereoCameraModel::reprojectionMatrix() const { return Q_; } inline double StereoCameraModel::baseline() const { /// @todo Currently assuming horizontal baseline return -right_.Tx() / right_.fx(); } inline double StereoCameraModel::getZ(double disparity) const { assert( initialized() ); return -right_.Tx() / (disparity - (left().cx() - right().cx())); } inline double StereoCameraModel::getDisparity(double Z) const { assert( initialized() ); return -right_.Tx() / Z + (left().cx() - right().cx()); ; } } //namespace image_geometry #endif vision_opencv-1.12.3/image_geometry/package.xml000066400000000000000000000017571302104371500215770ustar00rootroot00000000000000 image_geometry 1.12.3 `image_geometry` contains C++ and Python libraries for interpreting images geometrically. It interfaces the calibration parameters in sensor_msgs/CameraInfo messages with OpenCV functions such as image rectification, much as cv_bridge interfaces ROS sensor_msgs/Image with OpenCV data types. 
Patrick Mihelich Vincent Rabaud BSD http://www.ros.org/wiki/image_geometry catkin opencv3 sensor_msgs opencv3 sensor_msgs dvipng texlive-latex-extra vision_opencv-1.12.3/image_geometry/rosdoc.yaml000066400000000000000000000002771302104371500216330ustar00rootroot00000000000000 - builder: sphinx name: Python API output_dir: python sphinx_root_dir: doc - builder: doxygen name: C++ API output_dir: c++ file_patterns: '*.c *.cpp *.h *.cc *.hh *.dox' vision_opencv-1.12.3/image_geometry/setup.py000066400000000000000000000003441302104371500211630ustar00rootroot00000000000000#!/usr/bin/env python from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup d = generate_distutils_setup() d['packages'] = ['image_geometry'] d['package_dir'] = {'' : 'src'} setup(**d) vision_opencv-1.12.3/image_geometry/src/000077500000000000000000000000001302104371500202375ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/src/image_geometry/000077500000000000000000000000001302104371500232345ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/src/image_geometry/__init__.py000066400000000000000000000000771302104371500253510ustar00rootroot00000000000000from cameramodels import PinholeCameraModel, StereoCameraModel vision_opencv-1.12.3/image_geometry/src/image_geometry/cameramodels.py000066400000000000000000000275421302104371500262540ustar00rootroot00000000000000import array import cv2 import sensor_msgs.msg import math import copy import numpy def mkmat(rows, cols, L): mat = numpy.matrix(L, dtype='float64') mat.resize((rows,cols)) return mat class PinholeCameraModel: """ A pinhole camera is an idealized monocular camera. """ def __init__(self): self.K = None self.D = None self.R = None self.P = None self.full_K = None self.full_P = None self.width = None self.height = None self.binning_x = None self.binning_y = None self.raw_roi = None self.tf_frame = None self.stamp = None def fromCameraInfo(self, msg): """ :param msg: camera parameters :type msg: sensor_msgs.msg.CameraInfo Set the camera parameters from the :class:`sensor_msgs.msg.CameraInfo` message. 
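Binning and ROI from the message, when present, are folded into :math:`K` and :math:`P` below, so the projection methods of this model operate in the coordinates of the current (possibly binned and cropped) image.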
""" self.K = mkmat(3, 3, msg.K) if msg.D: self.D = mkmat(len(msg.D), 1, msg.D) else: self.D = None self.R = mkmat(3, 3, msg.R) self.P = mkmat(3, 4, msg.P) self.full_K = mkmat(3, 3, msg.K) self.full_P = mkmat(3, 4, msg.P) self.width = msg.width self.height = msg.height self.binning_x = max(1, msg.binning_x) self.binning_y = max(1, msg.binning_y) self.resolution = (msg.width, msg.height) self.raw_roi = copy.copy(msg.roi) # ROI all zeros is considered the same as full resolution if (self.raw_roi.x_offset == 0 and self.raw_roi.y_offset == 0 and self.raw_roi.width == 0 and self.raw_roi.height == 0): self.raw_roi.width = self.width self.raw_roi.height = self.height self.tf_frame = msg.header.frame_id self.stamp = msg.header.stamp # Adjust K and P for binning and ROI self.K[0,0] /= self.binning_x self.K[1,1] /= self.binning_y self.K[0,2] = (self.K[0,2] - self.raw_roi.x_offset) / self.binning_x self.K[1,2] = (self.K[1,2] - self.raw_roi.y_offset) / self.binning_y self.P[0,0] /= self.binning_x self.P[1,1] /= self.binning_y self.P[0,2] = (self.P[0,2] - self.raw_roi.x_offset) / self.binning_x self.P[1,2] = (self.P[1,2] - self.raw_roi.y_offset) / self.binning_y def rectifyImage(self, raw, rectified): """ :param raw: input image :type raw: :class:`CvMat` or :class:`IplImage` :param rectified: rectified output image :type rectified: :class:`CvMat` or :class:`IplImage` Applies the rectification specified by camera parameters :math:`K` and and :math:`D` to image `raw` and writes the resulting image `rectified`. """ self.mapx = numpy.ndarray(shape=(self.height, self.width, 1), dtype='float32') self.mapy = numpy.ndarray(shape=(self.height, self.width, 1), dtype='float32') cv2.initUndistortRectifyMap(self.K, self.D, self.R, self.P, (self.width, self.height), cv2.CV_32FC1, self.mapx, self.mapy) cv2.remap(raw, self.mapx, self.mapy, cv2.INTER_CUBIC, rectified) def rectifyPoint(self, uv_raw): """ :param uv_raw: pixel coordinates :type uv_raw: (u, v) Applies the rectification specified by camera parameters :math:`K` and and :math:`D` to point (u, v) and returns the pixel coordinates of the rectified point. """ src = mkmat(1, 2, list(uv_raw)) src.resize((1,1,2)) dst = cv2.undistortPoints(src, self.K, self.D, R=self.R, P=self.P) return dst[0,0] def project3dToPixel(self, point): """ :param point: 3D point :type point: (x, y, z) Returns the rectified pixel coordinates (u, v) of the 3D point, using the camera :math:`P` matrix. This is the inverse of :meth:`projectPixelTo3dRay`. """ src = mkmat(4, 1, [point[0], point[1], point[2], 1.0]) dst = self.P * src x = dst[0,0] y = dst[1,0] w = dst[2,0] if w != 0: return (x / w, y / w) else: return (float('nan'), float('nan')) def projectPixelTo3dRay(self, uv): """ :param uv: rectified pixel coordinates :type uv: (u, v) Returns the unit vector which passes from the camera center to through rectified pixel (u, v), using the camera :math:`P` matrix. This is the inverse of :meth:`project3dToPixel`. """ x = (uv[0] - self.cx()) / self.fx() y = (uv[1] - self.cy()) / self.fy() norm = math.sqrt(x*x + y*y + 1) x /= norm y /= norm z = 1.0 / norm return (x, y, z) def getDeltaU(self, deltaX, Z): """ :param deltaX: delta X, in cartesian space :type deltaX: float :param Z: Z, in cartesian space :type Z: float :rtype: float Compute delta u, given Z and delta X in Cartesian space. For given Z, this is the inverse of :meth:`getDeltaX`. 
""" fx = self.P[0, 0] if Z == 0: return float('inf') else: return fx * deltaX / Z def getDeltaV(self, deltaY, Z): """ :param deltaY: delta Y, in cartesian space :type deltaY: float :param Z: Z, in cartesian space :type Z: float :rtype: float Compute delta v, given Z and delta Y in Cartesian space. For given Z, this is the inverse of :meth:`getDeltaY`. """ fy = self.P[1, 1] if Z == 0: return float('inf') else: return fy * deltaY / Z def getDeltaX(self, deltaU, Z): """ :param deltaU: delta u in pixels :type deltaU: float :param Z: Z, in cartesian space :type Z: float :rtype: float Compute delta X, given Z in cartesian space and delta u in pixels. For given Z, this is the inverse of :meth:`getDeltaU`. """ fx = self.P[0, 0] return Z * deltaU / fx def getDeltaY(self, deltaV, Z): """ :param deltaV: delta v in pixels :type deltaV: float :param Z: Z, in cartesian space :type Z: float :rtype: float Compute delta Y, given Z in cartesian space and delta v in pixels. For given Z, this is the inverse of :meth:`getDeltaV`. """ fy = self.P[1, 1] return Z * deltaV / fy def fullResolution(self): """Returns the full resolution of the camera""" return self.resolution def intrinsicMatrix(self): """ Returns :math:`K`, also called camera_matrix in cv docs """ return self.K def distortionCoeffs(self): """ Returns :math:`D` """ return self.D def rotationMatrix(self): """ Returns :math:`R` """ return self.R def projectionMatrix(self): """ Returns :math:`P` """ return self.P def fullIntrinsicMatrix(self): """ Return the original camera matrix for full resolution """ return self.full_K def fullProjectionMatrix(self): """ Return the projection matrix for full resolution """ return self.full_P def cx(self): """ Returns x center """ return self.P[0,2] def cy(self): """ Returns y center """ return self.P[1,2] def fx(self): """ Returns x focal length """ return self.P[0,0] def fy(self): """ Returns y focal length """ return self.P[1,1] def Tx(self): """ Return the x-translation term of the projection matrix """ return self.P[0,3] def Ty(self): """ Return the y-translation term of the projection matrix """ return self.P[1,3] def tfFrame(self): """ Returns the tf frame name - a string - of the camera. This is the frame of the :class:`sensor_msgs.msg.CameraInfo` message. """ return self.tf_frame class StereoCameraModel: """ An idealized stereo camera. """ def __init__(self): self.left = PinholeCameraModel() self.right = PinholeCameraModel() def fromCameraInfo(self, left_msg, right_msg): """ :param left_msg: left camera parameters :type left_msg: sensor_msgs.msg.CameraInfo :param right_msg: right camera parameters :type right_msg: sensor_msgs.msg.CameraInfo Set the camera parameters from the :class:`sensor_msgs.msg.CameraInfo` messages. """ self.left.fromCameraInfo(left_msg) self.right.fromCameraInfo(right_msg) # [ Fx, 0, Cx, Fx*-Tx ] # [ 0, Fy, Cy, 0 ] # [ 0, 0, 1, 0 ] fx = self.right.P[0, 0] fy = self.right.P[1, 1] cx = self.right.P[0, 2] cy = self.right.P[1, 2] tx = -self.right.P[0, 3] / fx # Q is: # [ 1, 0, 0, -Clx ] # [ 0, 1, 0, -Cy ] # [ 0, 0, 0, Fx ] # [ 0, 0, 1 / Tx, (Crx-Clx)/Tx ] self.Q = numpy.zeros((4, 4), dtype='float64') self.Q[0, 0] = 1.0 self.Q[0, 3] = -cx self.Q[1, 1] = 1.0 self.Q[1, 3] = -cy self.Q[2, 3] = fx self.Q[3, 2] = 1 / tx def tfFrame(self): """ Returns the tf frame name - a string - of the 3d points. This is the frame of the :class:`sensor_msgs.msg.CameraInfo` message. It may be used as a source frame in :class:`tf.TransformListener`. 
""" return self.left.tfFrame() def project3dToPixel(self, point): """ :param point: 3D point :type point: (x, y, z) Returns the rectified pixel coordinates (u, v) of the 3D point, for each camera, as ((u_left, v_left), (u_right, v_right)) using the cameras' :math:`P` matrices. This is the inverse of :meth:`projectPixelTo3d`. """ l = self.left.project3dToPixel(point) r = self.right.project3dToPixel(point) return (l, r) def projectPixelTo3d(self, left_uv, disparity): """ :param left_uv: rectified pixel coordinates :type left_uv: (u, v) :param disparity: disparity, in pixels :type disparity: float Returns the 3D point (x, y, z) for the given pixel position, using the cameras' :math:`P` matrices. This is the inverse of :meth:`project3dToPixel`. Note that a disparity of zero implies that the 3D point is at infinity. """ src = mkmat(4, 1, [left_uv[0], left_uv[1], disparity, 1.0]) dst = self.Q * src x = dst[0, 0] y = dst[1, 0] z = dst[2, 0] w = dst[3, 0] if w != 0: return (x / w, y / w, z / w) else: return (0.0, 0.0, 0.0) def getZ(self, disparity): """ :param disparity: disparity, in pixels :type disparity: float Returns the depth at which a point is observed with a given disparity. This is the inverse of :meth:`getDisparity`. Note that a disparity of zero implies Z is infinite. """ if disparity == 0: return float('inf') Tx = -self.right.P[0, 3] return Tx / disparity def getDisparity(self, Z): """ :param Z: Z (depth), in cartesian space :type Z: float Returns the disparity observed for a point at depth Z. This is the inverse of :meth:`getZ`. """ if Z == 0: return float('inf') Tx = -self.right.P[0, 3] return Tx / Z vision_opencv-1.12.3/image_geometry/src/pinhole_camera_model.cpp000066400000000000000000000367121302104371500251020ustar00rootroot00000000000000#include "image_geometry/pinhole_camera_model.h" #include #include namespace image_geometry { enum DistortionState { NONE, CALIBRATED, UNKNOWN }; struct PinholeCameraModel::Cache { DistortionState distortion_state; cv::Mat_ K_binned, P_binned; // Binning applied, but not cropping mutable bool full_maps_dirty; mutable cv::Mat full_map1, full_map2; mutable bool reduced_maps_dirty; mutable cv::Mat reduced_map1, reduced_map2; mutable bool rectified_roi_dirty; mutable cv::Rect rectified_roi; Cache() : full_maps_dirty(true), reduced_maps_dirty(true), rectified_roi_dirty(true) { } }; PinholeCameraModel::PinholeCameraModel() { } PinholeCameraModel& PinholeCameraModel::operator=(const PinholeCameraModel& other) { if (other.initialized()) this->fromCameraInfo(other.cameraInfo()); return *this; } PinholeCameraModel::PinholeCameraModel(const PinholeCameraModel& other) { if (other.initialized()) fromCameraInfo(other.cam_info_); } // For uint32_t, string, bool... template bool update(const T& new_val, T& my_val) { if (my_val == new_val) return false; my_val = new_val; return true; } // For boost::array, std::vector template bool updateMat(const MatT& new_mat, MatT& my_mat, cv::Mat_& cv_mat, int rows, int cols) { if ((my_mat == new_mat) && (my_mat.size() == cv_mat.rows*cv_mat.cols)) return false; my_mat = new_mat; // D may be empty if camera is uncalibrated or distortion model is non-standard cv_mat = (my_mat.size() == 0) ? 
           cv::Mat_<double>() : cv::Mat_<double>(rows, cols, &my_mat[0]);
  return true;
}

template<typename MatT, typename MatU>
bool updateMat(const MatT& new_mat, MatT& my_mat, MatU& cv_mat)
{
  if ((my_mat == new_mat) && (my_mat.size() == cv_mat.rows*cv_mat.cols))
    return false;
  my_mat = new_mat;
  // D may be empty if camera is uncalibrated or distortion model is non-standard
  cv_mat = MatU(&my_mat[0]);
  return true;
}

bool PinholeCameraModel::fromCameraInfo(const sensor_msgs::CameraInfo& msg)
{
  // Create our repository of cached data (rectification maps, etc.)
  if (!cache_)
    cache_ = boost::make_shared<Cache>();

  // Binning = 0 is considered the same as binning = 1 (no binning).
  uint32_t binning_x = msg.binning_x ? msg.binning_x : 1;
  uint32_t binning_y = msg.binning_y ? msg.binning_y : 1;

  // ROI all zeros is considered the same as full resolution.
  sensor_msgs::RegionOfInterest roi = msg.roi;
  if (roi.x_offset == 0 && roi.y_offset == 0 && roi.width == 0 && roi.height == 0) {
    roi.width  = msg.width;
    roi.height = msg.height;
  }

  // Update time stamp (and frame_id if that changes for some reason)
  cam_info_.header = msg.header;

  // Update any parameters that have changed. The full rectification maps are
  // invalidated by any change in the calibration parameters OR binning.
  bool &full_dirty = cache_->full_maps_dirty;
  full_dirty |= update(msg.height, cam_info_.height);
  full_dirty |= update(msg.width,  cam_info_.width);
  full_dirty |= update(msg.distortion_model, cam_info_.distortion_model);
  full_dirty |= updateMat(msg.D, cam_info_.D, D_, 1, msg.D.size());
  full_dirty |= updateMat(msg.K, cam_info_.K, K_full_);
  full_dirty |= updateMat(msg.R, cam_info_.R, R_);
  full_dirty |= updateMat(msg.P, cam_info_.P, P_full_);
  full_dirty |= update(binning_x, cam_info_.binning_x);
  full_dirty |= update(binning_y, cam_info_.binning_y);

  // The reduced rectification maps are invalidated by any of the above or a
  // change in ROI.
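  // (Note: reduced_dirty below is first *assigned* from full_dirty rather than
  // OR-ed into, so rebuilding the full maps always forces the reduced maps to
  // be rebuilt as well, before the ROI-specific checks are applied.)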
bool &reduced_dirty = cache_->reduced_maps_dirty; reduced_dirty = full_dirty; reduced_dirty |= update(roi.x_offset, cam_info_.roi.x_offset); reduced_dirty |= update(roi.y_offset, cam_info_.roi.y_offset); reduced_dirty |= update(roi.height, cam_info_.roi.height); reduced_dirty |= update(roi.width, cam_info_.roi.width); reduced_dirty |= update(roi.do_rectify, cam_info_.roi.do_rectify); // As is the rectified ROI cache_->rectified_roi_dirty = reduced_dirty; // Figure out how to handle the distortion if (cam_info_.distortion_model == sensor_msgs::distortion_models::PLUMB_BOB || cam_info_.distortion_model == sensor_msgs::distortion_models::RATIONAL_POLYNOMIAL) { // If any distortion coefficient is non-zero, then need to apply the distortion cache_->distortion_state = NONE; for (size_t i = 0; i < cam_info_.D.size(); ++i) { if (cam_info_.D[i] != 0) { cache_->distortion_state = CALIBRATED; break; } } } else cache_->distortion_state = UNKNOWN; // If necessary, create new K_ and P_ adjusted for binning and ROI /// @todo Calculate and use rectified ROI bool adjust_binning = (binning_x > 1) || (binning_y > 1); bool adjust_roi = (roi.x_offset != 0) || (roi.y_offset != 0); if (!adjust_binning && !adjust_roi) { K_ = K_full_; P_ = P_full_; } else { K_ = K_full_; P_ = P_full_; // ROI is in full image coordinates, so change it first if (adjust_roi) { // Move principal point by the offset /// @todo Adjust P by rectified ROI instead K_(0,2) -= roi.x_offset; K_(1,2) -= roi.y_offset; P_(0,2) -= roi.x_offset; P_(1,2) -= roi.y_offset; } if (binning_x > 1) { double scale_x = 1.0 / binning_x; K_(0,0) *= scale_x; K_(0,2) *= scale_x; P_(0,0) *= scale_x; P_(0,2) *= scale_x; P_(0,3) *= scale_x; } if (binning_y > 1) { double scale_y = 1.0 / binning_y; K_(1,1) *= scale_y; K_(1,2) *= scale_y; P_(1,1) *= scale_y; P_(1,2) *= scale_y; P_(1,3) *= scale_y; } } return reduced_dirty; } bool PinholeCameraModel::fromCameraInfo(const sensor_msgs::CameraInfoConstPtr& msg) { return fromCameraInfo(*msg); } cv::Size PinholeCameraModel::fullResolution() const { assert( initialized() ); return cv::Size(cam_info_.width, cam_info_.height); } cv::Size PinholeCameraModel::reducedResolution() const { assert( initialized() ); cv::Rect roi = rectifiedRoi(); return cv::Size(roi.width / binningX(), roi.height / binningY()); } cv::Point2d PinholeCameraModel::toFullResolution(const cv::Point2d& uv_reduced) const { cv::Rect roi = rectifiedRoi(); return cv::Point2d(uv_reduced.x * binningX() + roi.x, uv_reduced.y * binningY() + roi.y); } cv::Rect PinholeCameraModel::toFullResolution(const cv::Rect& roi_reduced) const { cv::Rect roi = rectifiedRoi(); return cv::Rect(roi_reduced.x * binningX() + roi.x, roi_reduced.y * binningY() + roi.y, roi_reduced.width * binningX(), roi_reduced.height * binningY()); } cv::Point2d PinholeCameraModel::toReducedResolution(const cv::Point2d& uv_full) const { cv::Rect roi = rectifiedRoi(); return cv::Point2d((uv_full.x - roi.x) / binningX(), (uv_full.y - roi.y) / binningY()); } cv::Rect PinholeCameraModel::toReducedResolution(const cv::Rect& roi_full) const { cv::Rect roi = rectifiedRoi(); return cv::Rect((roi_full.x - roi.x) / binningX(), (roi_full.y - roi.y) / binningY(), roi_full.width / binningX(), roi_full.height / binningY()); } cv::Rect PinholeCameraModel::rawRoi() const { assert( initialized() ); return cv::Rect(cam_info_.roi.x_offset, cam_info_.roi.y_offset, cam_info_.roi.width, cam_info_.roi.height); } cv::Rect PinholeCameraModel::rectifiedRoi() const { assert( initialized() ); if (cache_->rectified_roi_dirty) { 
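    // Recompute lazily: fromCameraInfo() marks this cache entry dirty whenever
    // the calibration, binning, or raw ROI changes.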
if (!cam_info_.roi.do_rectify) cache_->rectified_roi = rawRoi(); else cache_->rectified_roi = rectifyRoi(rawRoi()); cache_->rectified_roi_dirty = false; } return cache_->rectified_roi; } cv::Point2d PinholeCameraModel::project3dToPixel(const cv::Point3d& xyz) const { assert( initialized() ); assert(P_(2, 3) == 0.0); // Calibrated stereo cameras should be in the same plane // [U V W]^T = P * [X Y Z 1]^T // u = U/W // v = V/W cv::Point2d uv_rect; uv_rect.x = (fx()*xyz.x + Tx()) / xyz.z + cx(); uv_rect.y = (fy()*xyz.y + Ty()) / xyz.z + cy(); return uv_rect; } cv::Point3d PinholeCameraModel::projectPixelTo3dRay(const cv::Point2d& uv_rect) const { assert( initialized() ); cv::Point3d ray; ray.x = (uv_rect.x - cx() - Tx()) / fx(); ray.y = (uv_rect.y - cy() - Ty()) / fy(); ray.z = 1.0; return ray; } void PinholeCameraModel::rectifyImage(const cv::Mat& raw, cv::Mat& rectified, int interpolation) const { assert( initialized() ); switch (cache_->distortion_state) { case NONE: raw.copyTo(rectified); break; case CALIBRATED: initRectificationMaps(); if (raw.depth() == CV_32F || raw.depth() == CV_64F) { cv::remap(raw, rectified, cache_->reduced_map1, cache_->reduced_map2, interpolation, cv::BORDER_CONSTANT, std::numeric_limits::quiet_NaN()); } else { cv::remap(raw, rectified, cache_->reduced_map1, cache_->reduced_map2, interpolation); } break; default: assert(cache_->distortion_state == UNKNOWN); throw Exception("Cannot call rectifyImage when distortion is unknown."); } } void PinholeCameraModel::unrectifyImage(const cv::Mat& rectified, cv::Mat& raw, int interpolation) const { assert( initialized() ); throw Exception("PinholeCameraModel::unrectifyImage is unimplemented."); /// @todo Implement unrectifyImage() // Similar to rectifyImage, but need to build separate set of inverse maps (raw->rectified)... // - Build src_pt Mat with all the raw pixel coordinates (or do it one row at a time) // - Do cv::undistortPoints(src_pt, dst_pt, K_, D_, R_, P_) // - Use convertMaps() to convert dst_pt to fast fixed-point maps // - cv::remap(rectified, raw, ...) // Need interpolation argument. Same caching behavior? } cv::Point2d PinholeCameraModel::rectifyPoint(const cv::Point2d& uv_raw) const { assert( initialized() ); if (cache_->distortion_state == NONE) return uv_raw; if (cache_->distortion_state == UNKNOWN) throw Exception("Cannot call rectifyPoint when distortion is unknown."); assert(cache_->distortion_state == CALIBRATED); /// @todo cv::undistortPoints requires the point data to be float, should allow double cv::Point2f raw32 = uv_raw, rect32; const cv::Mat src_pt(1, 1, CV_32FC2, &raw32.x); cv::Mat dst_pt(1, 1, CV_32FC2, &rect32.x); cv::undistortPoints(src_pt, dst_pt, K_, D_, R_, P_); return rect32; } cv::Point2d PinholeCameraModel::unrectifyPoint(const cv::Point2d& uv_rect) const { assert( initialized() ); if (cache_->distortion_state == NONE) return uv_rect; if (cache_->distortion_state == UNKNOWN) throw Exception("Cannot call unrectifyPoint when distortion is unknown."); assert(cache_->distortion_state == CALIBRATED); // Convert to a ray cv::Point3d ray = projectPixelTo3dRay(uv_rect); // Project the ray on the image cv::Mat r_vec, t_vec = cv::Mat_::zeros(3, 1); cv::Rodrigues(R_.t(), r_vec); std::vector image_point; cv::projectPoints(std::vector(1, ray), r_vec, t_vec, K_, D_, image_point); return image_point[0]; } cv::Rect PinholeCameraModel::rectifyRoi(const cv::Rect& roi_raw) const { assert( initialized() ); /// @todo Actually implement "best fit" as described by REP 104. 
  // For now, just rectify the four corners and take the bounding box.
  cv::Point2d rect_tl = rectifyPoint(cv::Point2d(roi_raw.x, roi_raw.y));
  cv::Point2d rect_tr = rectifyPoint(cv::Point2d(roi_raw.x + roi_raw.width, roi_raw.y));
  cv::Point2d rect_br = rectifyPoint(cv::Point2d(roi_raw.x + roi_raw.width,
                                                 roi_raw.y + roi_raw.height));
  cv::Point2d rect_bl = rectifyPoint(cv::Point2d(roi_raw.x, roi_raw.y + roi_raw.height));

  cv::Point roi_tl(std::ceil (std::min(rect_tl.x, rect_bl.x)),
                   std::ceil (std::min(rect_tl.y, rect_tr.y)));
  cv::Point roi_br(std::floor(std::max(rect_tr.x, rect_br.x)),
                   std::floor(std::max(rect_bl.y, rect_br.y)));

  return cv::Rect(roi_tl.x, roi_tl.y, roi_br.x - roi_tl.x, roi_br.y - roi_tl.y);
}

cv::Rect PinholeCameraModel::unrectifyRoi(const cv::Rect& roi_rect) const
{
  assert( initialized() );

  /// @todo Actually implement "best fit" as described by REP 104.

  // For now, just unrectify the four corners and take the bounding box.
  cv::Point2d raw_tl = unrectifyPoint(cv::Point2d(roi_rect.x, roi_rect.y));
  cv::Point2d raw_tr = unrectifyPoint(cv::Point2d(roi_rect.x + roi_rect.width, roi_rect.y));
  cv::Point2d raw_br = unrectifyPoint(cv::Point2d(roi_rect.x + roi_rect.width,
                                                  roi_rect.y + roi_rect.height));
  cv::Point2d raw_bl = unrectifyPoint(cv::Point2d(roi_rect.x, roi_rect.y + roi_rect.height));

  cv::Point roi_tl(std::floor(std::min(raw_tl.x, raw_bl.x)),
                   std::floor(std::min(raw_tl.y, raw_tr.y)));
  cv::Point roi_br(std::ceil (std::max(raw_tr.x, raw_br.x)),
                   std::ceil (std::max(raw_bl.y, raw_br.y)));

  return cv::Rect(roi_tl.x, roi_tl.y, roi_br.x - roi_tl.x, roi_br.y - roi_tl.y);
}

void PinholeCameraModel::initRectificationMaps() const
{
  /// @todo For large binning settings, can drop extra rows/cols at bottom/right boundary.
  /// Make sure we're handling that 100% correctly.

  if (cache_->full_maps_dirty) {
    // Create the full-size map at the binned resolution
    /// @todo Should binned resolution, K, P be part of public API?
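    // (The "binned resolution" is simply the full resolution divided by the
    // binning factors; K and P are rescaled to match just below.)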
cv::Size binned_resolution = fullResolution(); binned_resolution.width /= binningX(); binned_resolution.height /= binningY(); cv::Matx33d K_binned; cv::Matx34d P_binned; if (binningX() == 1 && binningY() == 1) { K_binned = K_full_; P_binned = P_full_; } else { K_binned = K_full_; P_binned = P_full_; if (binningX() > 1) { double scale_x = 1.0 / binningX(); K_binned(0,0) *= scale_x; K_binned(0,2) *= scale_x; P_binned(0,0) *= scale_x; P_binned(0,2) *= scale_x; P_binned(0,3) *= scale_x; } if (binningY() > 1) { double scale_y = 1.0 / binningY(); K_binned(1,1) *= scale_y; K_binned(1,2) *= scale_y; P_binned(1,1) *= scale_y; P_binned(1,2) *= scale_y; P_binned(1,3) *= scale_y; } } // Note: m1type=CV_16SC2 to use fast fixed-point maps (see cv::remap) cv::initUndistortRectifyMap(K_binned, D_, R_, P_binned, binned_resolution, CV_16SC2, cache_->full_map1, cache_->full_map2); cache_->full_maps_dirty = false; } if (cache_->reduced_maps_dirty) { /// @todo Use rectified ROI cv::Rect roi(cam_info_.roi.x_offset, cam_info_.roi.y_offset, cam_info_.roi.width, cam_info_.roi.height); if (roi.x != 0 || roi.y != 0 || roi.height != (int)cam_info_.height || roi.width != (int)cam_info_.width) { // map1 contains integer (x,y) offsets, which we adjust by the ROI offset // map2 contains LUT index for subpixel interpolation, which we can leave as-is roi.x /= binningX(); roi.y /= binningY(); roi.width /= binningX(); roi.height /= binningY(); cache_->reduced_map1 = cache_->full_map1(roi) - cv::Scalar(roi.x, roi.y); cache_->reduced_map2 = cache_->full_map2(roi); } else { // Otherwise we're rectifying the full image cache_->reduced_map1 = cache_->full_map1; cache_->reduced_map2 = cache_->full_map2; } cache_->reduced_maps_dirty = false; } } } //namespace image_geometry vision_opencv-1.12.3/image_geometry/src/stereo_camera_model.cpp000066400000000000000000000103151302104371500247340ustar00rootroot00000000000000#include "image_geometry/stereo_camera_model.h" namespace image_geometry { StereoCameraModel::StereoCameraModel() : Q_(0.0) { Q_(0,0) = Q_(1,1) = 1.0; } StereoCameraModel::StereoCameraModel(const StereoCameraModel& other) : left_(other.left_), right_(other.right_), Q_(0.0) { Q_(0,0) = Q_(1,1) = 1.0; if (other.initialized()) updateQ(); } StereoCameraModel& StereoCameraModel::operator=(const StereoCameraModel& other) { if (other.initialized()) this->fromCameraInfo(other.left_.cameraInfo(), other.right_.cameraInfo()); return *this; } bool StereoCameraModel::fromCameraInfo(const sensor_msgs::CameraInfo& left, const sensor_msgs::CameraInfo& right) { bool changed_left = left_.fromCameraInfo(left); bool changed_right = right_.fromCameraInfo(right); bool changed = changed_left || changed_right; // Note: don't require identical time stamps to allow imperfectly synced stereo. assert( left_.tfFrame() == right_.tfFrame() ); assert( left_.fx() == right_.fx() ); assert( left_.fy() == right_.fy() ); assert( left_.cy() == right_.cy() ); // cx may differ for verged cameras if (changed) updateQ(); return changed; } bool StereoCameraModel::fromCameraInfo(const sensor_msgs::CameraInfoConstPtr& left, const sensor_msgs::CameraInfoConstPtr& right) { return fromCameraInfo(*left, *right); } void StereoCameraModel::updateQ() { // Update variable fields of reprojection matrix /* From Springer Handbook of Robotics, p. 524: [ Fx 0 Cx 0 ] P = [ 0 Fy Cy 0 ] [ 0 0 1 0 ] [ Fx 0 Cx' FxTx ] P' = [ 0 Fy Cy 0 ] [ 0 0 1 0 ] where primed parameters are from the left projection matrix, unprimed from the right. 
[u v 1]^T = P * [x y z 1]^T [u-d v 1]^T = P' * [x y z 1]^T Combining the two equations above results in the following equation [u v u-d 1]^T = [ Fx 0 Cx 0 ] * [ x y z 1]^T [ 0 Fy Cy 0 ] [ Fx 0 Cx' FxTx ] [ 0 0 1 0 ] Subtracting the 3rd from from the first and inverting the expression results in the following equation. [x y z 1]^T = Q * [u v d 1]^T Where Q is defined as Q = [ FyTx 0 0 -FyCxTx ] [ 0 FxTx 0 -FxCyTx ] [ 0 0 0 FxFyTx ] [ 0 0 -Fy Fy(Cx-Cx') ] Using the assumption Fx = Fy Q can be simplified to the following. But for compatibility with stereo cameras with different focal lengths we will use the full Q matrix. [ 1 0 0 -Cx ] Q = [ 0 1 0 -Cy ] [ 0 0 0 Fx ] [ 0 0 -1/Tx (Cx-Cx')/Tx ] Disparity = x_left - x_right For compatibility with stereo cameras with different focal lengths we will use the full Q matrix. */ double Tx = -baseline(); // The baseline member negates our Tx. Undo this negation Q_(0,0) = left_.fy() * Tx; Q_(0,3) = -left_.fy() * left_.cx() * Tx; Q_(1,1) = left_.fx() * Tx; Q_(1,3) = -left_.fx() * left_.cy() * Tx; Q_(2,3) = left_.fx() * left_.fy() * Tx; Q_(3,2) = -left_.fy(); Q_(3,3) = left_.fy() * (left_.cx() - right_.cx()); // zero when disparities are pre-adjusted } void StereoCameraModel::projectDisparityTo3d(const cv::Point2d& left_uv_rect, float disparity, cv::Point3d& xyz) const { assert( initialized() ); // Do the math inline: // [X Y Z W]^T = Q * [u v d 1]^T // Point = (X/W, Y/W, Z/W) // cv::perspectiveTransform could be used but with more overhead. double u = left_uv_rect.x, v = left_uv_rect.y; cv::Point3d XYZ( (Q_(0,0) * u) + Q_(0,3), (Q_(1,1) * v) + Q_(1,3), Q_(2,3)); double W = Q_(3,2)*disparity + Q_(3,3); xyz = XYZ * (1.0/W); } const double StereoCameraModel::MISSING_Z = 10000.; void StereoCameraModel::projectDisparityImageTo3d(const cv::Mat& disparity, cv::Mat& point_cloud, bool handleMissingValues) const { assert( initialized() ); cv::reprojectImageTo3D(disparity, point_cloud, Q_, handleMissingValues); } } //namespace image_geometry vision_opencv-1.12.3/image_geometry/test/000077500000000000000000000000001302104371500204275ustar00rootroot00000000000000vision_opencv-1.12.3/image_geometry/test/CMakeLists.txt000066400000000000000000000002411302104371500231640ustar00rootroot00000000000000catkin_add_nosetests(directed.py) catkin_add_gtest(${PROJECT_NAME}-utest utest.cpp) target_link_libraries(${PROJECT_NAME}-utest ${PROJECT_NAME} ${OpenCV_LIBS}) vision_opencv-1.12.3/image_geometry/test/directed.py000066400000000000000000000065671302104371500226020ustar00rootroot00000000000000import rostest import rospy import unittest import sensor_msgs.msg from image_geometry import PinholeCameraModel, StereoCameraModel class TestDirected(unittest.TestCase): def setUp(self): pass def test_monocular(self): ci = sensor_msgs.msg.CameraInfo() ci.width = 640 ci.height = 480 print ci cam = PinholeCameraModel() cam.fromCameraInfo(ci) print cam.rectifyPoint((0, 0)) print cam.project3dToPixel((0,0,0)) def test_stereo(self): lmsg = sensor_msgs.msg.CameraInfo() rmsg = sensor_msgs.msg.CameraInfo() for m in (lmsg, rmsg): m.width = 640 m.height = 480 # These parameters taken from a real camera calibration lmsg.D = [-0.363528858080088, 0.16117037733986861, -8.1109585007538829e-05, -0.00044776712298447841, 0.0] lmsg.K = [430.15433020105519, 0.0, 311.71339830549732, 0.0, 430.60920415473657, 221.06824942698509, 0.0, 0.0, 1.0] lmsg.R = [0.99806560714807102, 0.0068562422224214027, 0.061790256276695904, -0.0067522959054715113, 0.99997541519165112, -0.0018909025066874664, -0.061801701660692349, 
0.0014700186639396652, 0.99808736527268516] lmsg.P = [295.53402059708782, 0.0, 285.55760765075684, 0.0, 0.0, 295.53402059708782, 223.29617881774902, 0.0, 0.0, 0.0, 1.0, 0.0] rmsg.D = [-0.3560641041112021, 0.15647260261553159, -0.00016442960757099968, -0.00093175810713916221] rmsg.K = [428.38163131344191, 0.0, 327.95553847249192, 0.0, 428.85728580588329, 217.54828640915309, 0.0, 0.0, 1.0] rmsg.R = [0.9982082576219119, 0.0067433328293516528, 0.059454199832973849, -0.0068433268864187356, 0.99997549128605434, 0.0014784127772287513, -0.059442773257581252, -0.0018826283666309878, 0.99822993965212292] rmsg.P = [295.53402059708782, 0.0, 285.55760765075684, -26.507895206214123, 0.0, 295.53402059708782, 223.29617881774902, 0.0, 0.0, 0.0, 1.0, 0.0] cam = StereoCameraModel() cam.fromCameraInfo(lmsg, rmsg) for x in (16, 320, m.width - 16): for y in (16, 240, m.height - 16): for d in range(1, 10): pt3d = cam.projectPixelTo3d((x, y), d) ((lx, ly), (rx, ry)) = cam.project3dToPixel(pt3d) self.assertAlmostEqual(y, ly, 3) self.assertAlmostEqual(y, ry, 3) self.assertAlmostEqual(x, lx, 3) self.assertAlmostEqual(x, rx + d, 3) u = 100.0 v = 200.0 du = 17.0 dv = 23.0 Z = 2.0 xyz0 = cam.left.projectPixelTo3dRay((u, v)) xyz0 = (xyz0[0] * (Z / xyz0[2]), xyz0[1] * (Z / xyz0[2]), Z) xyz1 = cam.left.projectPixelTo3dRay((u + du, v + dv)) xyz1 = (xyz1[0] * (Z / xyz1[2]), xyz1[1] * (Z / xyz1[2]), Z) self.assertAlmostEqual(cam.left.getDeltaU(xyz1[0] - xyz0[0], Z), du, 3) self.assertAlmostEqual(cam.left.getDeltaV(xyz1[1] - xyz0[1], Z), dv, 3) self.assertAlmostEqual(cam.left.getDeltaX(du, Z), xyz1[0] - xyz0[0], 3) self.assertAlmostEqual(cam.left.getDeltaY(dv, Z), xyz1[1] - xyz0[1], 3) if __name__ == '__main__': if 1: rostest.unitrun('image_geometry', 'directed', TestDirected) else: suite = unittest.TestSuite() suite.addTest(TestDirected('test_stereo')) unittest.TextTestRunner(verbosity=2).run(suite) vision_opencv-1.12.3/image_geometry/test/utest.cpp000066400000000000000000000226441302104371500223070ustar00rootroot00000000000000#include "image_geometry/pinhole_camera_model.h" #include #include /// @todo Tests with simple values (R = identity, D = 0, P = K or simple scaling) /// @todo Test projection functions for right stereo values, P(:,3) != 0 /// @todo Tests for [un]rectifyImage /// @todo Tests using ROI, needs support from PinholeCameraModel /// @todo Tests for StereoCameraModel class PinholeTest : public testing::Test { protected: virtual void SetUp() { /// @todo Just load these from file // These parameters taken from a real camera calibration double D[] = {-0.363528858080088, 0.16117037733986861, -8.1109585007538829e-05, -0.00044776712298447841, 0.0}; double K[] = {430.15433020105519, 0.0, 311.71339830549732, 0.0, 430.60920415473657, 221.06824942698509, 0.0, 0.0, 1.0}; double R[] = {0.99806560714807102, 0.0068562422224214027, 0.061790256276695904, -0.0067522959054715113, 0.99997541519165112, -0.0018909025066874664, -0.061801701660692349, 0.0014700186639396652, 0.99808736527268516}; double P[] = {295.53402059708782, 0.0, 285.55760765075684, 0.0, 0.0, 295.53402059708782, 223.29617881774902, 0.0, 0.0, 0.0, 1.0, 0.0}; cam_info_.header.frame_id = "tf_frame"; cam_info_.height = 480; cam_info_.width = 640; // No ROI cam_info_.D.resize(5); std::copy(D, D+5, cam_info_.D.begin()); std::copy(K, K+9, cam_info_.K.begin()); std::copy(R, R+9, cam_info_.R.begin()); std::copy(P, P+12, cam_info_.P.begin()); cam_info_.distortion_model = sensor_msgs::distortion_models::PLUMB_BOB; model_.fromCameraInfo(cam_info_); } sensor_msgs::CameraInfo 
cam_info_; image_geometry::PinholeCameraModel model_; }; TEST_F(PinholeTest, accessorsCorrect) { EXPECT_STREQ("tf_frame", model_.tfFrame().c_str()); EXPECT_EQ(cam_info_.P[0], model_.fx()); EXPECT_EQ(cam_info_.P[5], model_.fy()); EXPECT_EQ(cam_info_.P[2], model_.cx()); EXPECT_EQ(cam_info_.P[6], model_.cy()); } TEST_F(PinholeTest, projectPoint) { // Spot test an arbitrary point. { cv::Point2d uv(100, 100); cv::Point3d xyz = model_.projectPixelTo3dRay(uv); EXPECT_NEAR(-0.62787224048135637, xyz.x, 1e-8); EXPECT_NEAR(-0.41719792045817677, xyz.y, 1e-8); EXPECT_DOUBLE_EQ(1.0, xyz.z); } // Principal point should project straight out. { cv::Point2d uv(model_.cx(), model_.cy()); cv::Point3d xyz = model_.projectPixelTo3dRay(uv); EXPECT_DOUBLE_EQ(0.0, xyz.x); EXPECT_DOUBLE_EQ(0.0, xyz.y); EXPECT_DOUBLE_EQ(1.0, xyz.z); } // Check projecting to 3d and back over entire image is accurate. const size_t step = 10; for (size_t row = 0; row <= cam_info_.height; row += step) { for (size_t col = 0; col <= cam_info_.width; col += step) { cv::Point2d uv(row, col), uv_back; cv::Point3d xyz = model_.projectPixelTo3dRay(uv); uv_back = model_.project3dToPixel(xyz); // Measured max error at 1.13687e-13 EXPECT_NEAR(uv.x, uv_back.x, 1.14e-13) << "at (" << row << ", " << col << ")"; EXPECT_NEAR(uv.y, uv_back.y, 1.14e-13) << "at (" << row << ", " << col << ")"; } } } TEST_F(PinholeTest, rectifyPoint) { // Spot test an arbitrary point. { cv::Point2d uv_raw(100, 100), uv_rect; uv_rect = model_.rectifyPoint(uv_raw); EXPECT_DOUBLE_EQ(142.30311584472656, uv_rect.x); EXPECT_DOUBLE_EQ(132.061065673828, uv_rect.y); } /// @todo Need R = identity for the principal point tests. #if 0 // Test rectifyPoint takes (c'x, c'y) [from K] -> (cx, cy) [from P]. double cxp = model_.intrinsicMatrix()(0,2), cyp = model_.intrinsicMatrix()(1,2); { cv::Point2d uv_raw(cxp, cyp), uv_rect; model_.rectifyPoint(uv_raw, uv_rect); EXPECT_NEAR(uv_rect.x, model_.cx(), 1e-4); EXPECT_NEAR(uv_rect.y, model_.cy(), 1e-4); } // Test unrectifyPoint takes (cx, cy) [from P] -> (c'x, c'y) [from K]. { cv::Point2d uv_rect(model_.cx(), model_.cy()), uv_raw; model_.unrectifyPoint(uv_rect, uv_raw); EXPECT_NEAR(uv_raw.x, cxp, 1e-4); EXPECT_NEAR(uv_raw.y, cyp, 1e-4); } #endif // Check rectifying then unrectifying over most of the image is accurate. const size_t step = 5; const size_t border = 65; // Expect bad accuracy far from the center of the image. for (size_t row = border; row <= cam_info_.height - border; row += step) { for (size_t col = border; col <= cam_info_.width - border; col += step) { cv::Point2d uv_raw(row, col), uv_rect, uv_unrect; uv_rect = model_.rectifyPoint(uv_raw); uv_unrect = model_.unrectifyPoint(uv_rect); // Check that we're at least within a pixel... 
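      // (cv::undistortPoints inverts the distortion model iteratively, so some
      // round-trip error is expected, growing toward the image borders.)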
EXPECT_NEAR(uv_raw.x, uv_unrect.x, 1.0); EXPECT_NEAR(uv_raw.y, uv_unrect.y, 1.0); } } } TEST_F(PinholeTest, getDeltas) { double u = 100.0, v = 200.0, du = 17.0, dv = 23.0, Z = 2.0; cv::Point2d uv0(u, v), uv1(u + du, v + dv); cv::Point3d xyz0, xyz1; xyz0 = model_.projectPixelTo3dRay(uv0); xyz0 *= (Z / xyz0.z); xyz1 = model_.projectPixelTo3dRay(uv1); xyz1 *= (Z / xyz1.z); EXPECT_NEAR(model_.getDeltaU(xyz1.x - xyz0.x, Z), du, 1e-4); EXPECT_NEAR(model_.getDeltaV(xyz1.y - xyz0.y, Z), dv, 1e-4); EXPECT_NEAR(model_.getDeltaX(du, Z), xyz1.x - xyz0.x, 1e-4); EXPECT_NEAR(model_.getDeltaY(dv, Z), xyz1.y - xyz0.y, 1e-4); } TEST_F(PinholeTest, initialization) { sensor_msgs::CameraInfo info; image_geometry::PinholeCameraModel camera; camera.fromCameraInfo(info); EXPECT_EQ(camera.initialized(), 1); EXPECT_EQ(camera.projectionMatrix().rows, 3); EXPECT_EQ(camera.projectionMatrix().cols, 4); } TEST_F(PinholeTest, rectifyIfCalibrated) { /// @todo use forward distortion for a better test // Ideally this test would have two images stored on disk // one which is distorted and the other which is rectified, // and then rectification would take place here and the output // image compared to the one on disk (which would mean if // the distortion coefficients above can't change once paired with // an image). // Later could incorporate distort code // (https://github.com/lucasw/vimjay/blob/master/src/standalone/distort_image.cpp) // to take any image distort it, then undistort with rectifyImage, // and given the distortion coefficients are consistent the input image // and final output image should be mostly the same (though some // interpolation error // creeps in), except for outside a masked region where information was lost. // The masked region can be generated with a pure white image that // goes through the same process (if it comes out completely black // then the distortion parameters are problematic). // For now generate an image and pass the test simply if // the rectified image does not match the distorted image. // Then zero out the first distortion coefficient and run // the test again. // Then zero out all the distortion coefficients and test // that the output image is the same as the input. 
cv::Mat distorted_image(cv::Size(cam_info_.width, cam_info_.height), CV_8UC3); // draw a grid const cv::Scalar color = cv::Scalar(255, 255, 255); // draw the lines thick so the proportion of error due to // interpolation is reduced const int thickness = 7; const int type = 8; for (size_t y = 0; y <= cam_info_.height; y += cam_info_.height/10) { cv::line(distorted_image, cv::Point(0, y), cv::Point(cam_info_.width, y), color, type, thickness); } for (size_t x = 0; x <= cam_info_.width; x += cam_info_.width/10) { // draw the lines thick so the prorportion of interpolation error is reduced cv::line(distorted_image, cv::Point(x, 0), cv::Point(x, cam_info_.height), color, type, thickness); } cv::Mat rectified_image; // Just making this number up, maybe ought to be larger // since a completely different image would be on the order of // width * height * 255 = 78e6 const double diff_threshold = 10000.0; double error; // Test that rectified image is sufficiently different // using default distortion model_.rectifyImage(distorted_image, rectified_image); error = cv::norm(distorted_image, rectified_image, cv::NORM_L1); // Just making this number up, maybe ought to be larger EXPECT_GT(error, diff_threshold); // Test that rectified image is sufficiently different // using default distortion but with first element zeroed // out. sensor_msgs::CameraInfo cam_info_2 = cam_info_; cam_info_2.D[0] = 0.0; model_.fromCameraInfo(cam_info_2); model_.rectifyImage(distorted_image, rectified_image); error = cv::norm(distorted_image, rectified_image, cv::NORM_L1); EXPECT_GT(error, diff_threshold); // Test that rectified image is the same using zero distortion cam_info_2.D.assign(cam_info_2.D.size(), 0); model_.fromCameraInfo(cam_info_2); model_.rectifyImage(distorted_image, rectified_image); error = cv::norm(distorted_image, rectified_image, cv::NORM_L1); EXPECT_EQ(error, 0); // Test that rectified image is the same using empty distortion cam_info_2.D.clear(); model_.fromCameraInfo(cam_info_2); model_.rectifyImage(distorted_image, rectified_image); error = cv::norm(distorted_image, rectified_image, cv::NORM_L1); EXPECT_EQ(error, 0); // restore original distortion model_.fromCameraInfo(cam_info_); } int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } vision_opencv-1.12.3/opencv_tests/000077500000000000000000000000001302104371500171675ustar00rootroot00000000000000vision_opencv-1.12.3/opencv_tests/CHANGELOG.rst000066400000000000000000000157351302104371500212230ustar00rootroot00000000000000^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Changelog for package opencv_tests ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1.12.3 (2016-12-04) ------------------- 1.12.2 (2016-09-24) ------------------- 1.12.1 (2016-07-11) ------------------- * Support compressed Images messages in python for indigo - Add cv2_to_comprssed_imgmsg: Convert from cv2 image to compressed image ros msg. - Add comprssed_imgmsg_to_cv2: Convert the compress message to a new image. - Add compressed image tests. - Add time to msgs (compressed and regular). add enumerants test for compressed image. merge the compressed tests with the regular ones. better comment explanation. I will squash this commit. Fix indentation fix typo mistage: from .imgmsg_to_compressed_cv2 to .compressed_imgmsg_to_cv2. remove cv2.CV_8UC1 remove rospy and time depndency. change from IMREAD_COLOR to IMREAD_ANYCOLOR. - make indentaion of 4. - remove space trailer. - remove space from empty lines. - another set of for loops, it will make things easier to track. 
In that new set, just have the number of channels in ([],1,3,4) (ignore two for jpg). from: https://github.com/ros-perception/vision_opencv/pull/132#discussion_r66721943 - keep the OpenCV error message. from: https://github.com/ros-perception/vision_opencv/pull/132#discussion_r66721013 add debug print for test. add case for 4 channels in test. remove 4 channels case from compressed test. add debug print for test. change typo of format. fix typo in format. change from dip to dib. change to IMREAD_ANYCOLOR as python code. (as it should). rename TIFF to tiff Sperate the tests one for regular images and one for compressed. update comment * Contributors: talregev 1.12.0 (2016-03-18) ------------------- 1.11.12 (2016-03-10) -------------------- 1.11.11 (2016-01-31) -------------------- * fix a few warnings in doc jobs * Contributors: Vincent Rabaud 1.11.10 (2016-01-16) -------------------- 1.11.9 (2015-11-29) ------------------- 1.11.8 (2015-07-15) ------------------- * simplify dependencies * Contributors: Vincent Rabaud 1.11.7 (2014-12-14) ------------------- 1.11.6 (2014-11-16) ------------------- 1.11.5 (2014-09-21) ------------------- 1.11.4 (2014-07-27) ------------------- 1.11.3 (2014-06-08) ------------------- * remove file whose functinality is now in cv_bridge * remove references to cv (use cv2) * Correct dependency from non-existent package to cv_bridge * Contributors: Isaac Isao Saito, Vincent Rabaud 1.11.2 (2014-04-28) ------------------- 1.11.1 (2014-04-16) ------------------- 1.11.0 (2014-02-15) ------------------- 1.10.15 (2014-02-07) -------------------- 1.10.14 (2013-11-23 16:17) -------------------------- * Contributors: Vincent Rabaud 1.10.13 (2013-11-23 09:19) -------------------------- * Contributors: Vincent Rabaud 1.10.12 (2013-11-22) -------------------- * Contributors: Vincent Rabaud 1.10.11 (2013-10-23) -------------------- * Contributors: Vincent Rabaud 1.10.10 (2013-10-19) -------------------- * Contributors: Vincent Rabaud 1.10.9 (2013-10-07) ------------------- * Contributors: Vincent Rabaud 1.10.8 (2013-09-09) ------------------- * update email address * Contributors: Vincent Rabaud 1.10.7 (2013-07-17) ------------------- 1.10.6 (2013-03-01) ------------------- 1.10.5 (2013-02-11) ------------------- 1.10.4 (2013-02-02) ------------------- 1.10.3 (2013-01-17) ------------------- 1.10.2 (2013-01-13) ------------------- 1.10.1 (2013-01-10) ------------------- * fixes `#5 `_ by removing the logic from Python and using wrapped C++ and adding a test for it * Contributors: Vincent Rabaud 1.10.0 (2013-01-03) ------------------- 1.9.15 (2013-01-02) ------------------- 1.9.14 (2012-12-30) ------------------- 1.9.13 (2012-12-15) ------------------- 1.9.12 (2012-12-14) ------------------- * Removed brief tag Conflicts: opencv_tests/package.xml * buildtool_depend catkin fix * Contributors: William Woodall 1.9.11 (2012-12-10) ------------------- 1.9.10 (2012-10-04) ------------------- 1.9.9 (2012-10-01) ------------------ 1.9.8 (2012-09-30) ------------------ 1.9.7 (2012-09-28 21:07) ------------------------ * add missing stuff * make sure we find catkin * Contributors: Vincent Rabaud 1.9.6 (2012-09-28 15:17) ------------------------ * move the test to where it belongs * fix the tests and the API to not handle conversion from CV_TYPE to Color type (does not make sense) * make all the tests pass * comply to the new Catkin API * backport the C++ test from Fuerte * Contributors: Vincent Rabaud 1.9.5 (2012-09-15) ------------------ * remove dependencies to the opencv2 ROS package 
* Contributors: Vincent Rabaud 1.9.4 (2012-09-13) ------------------ 1.9.3 (2012-09-12) ------------------ * update to nosetests * Contributors: Vincent Rabaud 1.9.2 (2012-09-07) ------------------ * be more compliant to the latest catkin * added catkin_project() to cv_bridge, image_geometry, and opencv_tests * Contributors: Jonathan Binney, Vincent Rabaud 1.9.1 (2012-08-28 22:06) ------------------------ * remove a deprecated header * Contributors: Vincent Rabaud 1.9.0 (2012-08-28 14:29) ------------------------ * cleanup by Jon Binney * catkinized opencv_tests by Jon Binney * remove the version check, let's trust OpenCV :) * revert the removal of opencv2 * finally get rid of opencv2 as it is a system dependency now * bump REQUIRED version of OpenCV to 2.3.2, which is what's in ros-fuerte-opencv * switch rosdep name to opencv2, to refer to ros-fuerte-opencv2 * Fixing link lines for gtest against opencv. * Adding opencv2 to all manifests, so that client packages may not break when using them. * baking in opencv debs and attempting a pre-release * Another hack for prerelease to quiet test failures. * Dissable a dubious opencv test. Temporary HACK. * Changing to expect for more verbose failure. * Minor change to test. * Making this depend on libopencv-2.3-dev debian available in ros-shadow. * mono16 -> bgr conversion tested and fixed in C * Added Ubuntu platform tags to manifest * Tuned for parc loop * Demo of ROS node face detecton * mono16 support, ticket `#2890 `_ * Remove use of deprecated rosbuild macros * cv_bridge split from opencv2 * Name changes for opencv -> vision_opencv * Validation for image message encoding * utest changed to reflect rosimgtocv change to imgmsgtocv * Add opencvpython as empty package * New methods for cv image conversion * Disabling tests on OSX, `#2769 `_ * New Python CvBridge, rewrote C CvBridge, regression test for C and Python CvBridge * Fix underscore problem, test 8UC3->BGR8, fix 8UC3->BGR8 * New image format * Image message and CvBridge change * Rename rows,cols to height,width in Image message * New node bbc for image testing * Make executable * Pong demo * Missing utest.cpp * New sensor_msgs::Image message * Contributors: Vincent Rabaud, ethanrublee, gerkey, jamesb, jamesbowman, pantofaru, vrabaud, wheeler vision_opencv-1.12.3/opencv_tests/CMakeLists.txt000066400000000000000000000001521302104371500217250ustar00rootroot00000000000000cmake_minimum_required(VERSION 2.8) project(opencv_tests) find_package(catkin REQUIRED) catkin_package() vision_opencv-1.12.3/opencv_tests/launch/000077500000000000000000000000001302104371500204415ustar00rootroot00000000000000vision_opencv-1.12.3/opencv_tests/launch/pong.launch000066400000000000000000000003331302104371500225770ustar00rootroot00000000000000 vision_opencv-1.12.3/opencv_tests/mainpage.dox000066400000000000000000000052731302104371500214730ustar00rootroot00000000000000/** \mainpage \htmlinclude manifest.html \b opencv_tests is ... \section codeapi Code API \section rosapi ROS API */vision_opencv-1.12.3/opencv_tests/nodes/000077500000000000000000000000001302104371500202775ustar00rootroot00000000000000vision_opencv-1.12.3/opencv_tests/nodes/broadcast.py000077500000000000000000000055461302104371500226300ustar00rootroot00000000000000#!/usr/bin/env python # Software License Agreement (BSD License) # # Copyright (c) 2008, Willow Garage, Inc. # Copyright (c) 2016, Tal Regev. # All rights reserved. 
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of the Willow Garage nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

import sys
import time
import math

import rospy
import cv2
import sensor_msgs.msg
from cv_bridge import CvBridge


# Cycle through the given list of image file names, publishing each image to
# the given topic as both a regular and a compressed ROS Image message.
class Source:

    def __init__(self, topic, filenames):
        self.pub = rospy.Publisher(topic, sensor_msgs.msg.Image)
        self.pub_compressed = rospy.Publisher(topic + "/compressed", sensor_msgs.msg.CompressedImage)
        self.filenames = filenames

    def spin(self):
        time.sleep(1.0)
        cvb = CvBridge()
        while not rospy.core.is_shutdown():
            cvim = cv2.imread(self.filenames[0])
            self.pub.publish(cvb.cv2_to_imgmsg(cvim))
            self.pub_compressed.publish(cvb.cv2_to_compressed_imgmsg(cvim))
            self.filenames = self.filenames[1:] + [self.filenames[0]]
            time.sleep(1)

def main(args):
    s = Source(args[1], args[2:])
    rospy.init_node('Source')
    try:
        s.spin()
        rospy.spin()
        outcome = 'test completed'
    except KeyboardInterrupt:
        print "shutting down"
        outcome = 'keyboard interrupt'
    rospy.core.signal_shutdown(outcome)

if __name__ == '__main__':
    main(sys.argv)
vision_opencv-1.12.3/opencv_tests/nodes/rosfacedetect.py000077500000000000000000000104661302104371500234740ustar00rootroot00000000000000#!/usr/bin/python
"""
This program is a demonstration of face and object detection using haar-like features.
The program finds faces in a camera image or video stream and displays a red box around them.

Original C implementation by:  ?
Python implementation by: Roman Stanchak, James Bowman
Updated: Copyright (c) 2016, Tal Regev.
"""

import sys
import os
from optparse import OptionParser

import rospy
import sensor_msgs.msg
from cv_bridge import CvBridge
import cv2
import numpy

# Parameters for haar detection
# From the API:
# The default parameters (scale_factor=2, min_neighbors=3, flags=0) are tuned
# for accurate yet slow object detection.
For a faster operation on real video # images the settings are: # scale_factor=1.2, min_neighbors=2, flags=CV_HAAR_DO_CANNY_PRUNING, # min_size= opencv_tests 1.12.3 Tests the enumerants of the ROS Image message, and functionally tests the Python and C++ implementations of CvBridge. James Bowman Vincent Rabaud BSD http://wiki.ros.org/opencv_tests catkin cv_bridge vision_opencv-1.12.3/vision_opencv/000077500000000000000000000000001302104371500173345ustar00rootroot00000000000000vision_opencv-1.12.3/vision_opencv/CHANGELOG.rst000066400000000000000000000052631302104371500213630ustar00rootroot00000000000000^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Changelog for package vision_opencv ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1.12.3 (2016-12-04) ------------------- 1.12.2 (2016-09-24) ------------------- 1.12.1 (2016-07-11) ------------------- 1.12.0 (2016-03-18) ------------------- * remove opencv_apps from vision_opencv * Contributors: Vincent Rabaud 1.11.12 (2016-03-10) -------------------- 1.11.11 (2016-01-31) -------------------- 1.11.10 (2016-01-16) -------------------- 1.11.9 (2015-11-29) ------------------- * Add opencv_apps to vision_opencv dependency * Contributors: Ryohei Ueda 1.11.8 (2015-07-15) ------------------- 1.11.7 (2014-12-14) ------------------- 1.11.6 (2014-11-16) ------------------- 1.11.5 (2014-09-21) ------------------- 1.11.4 (2014-07-27) ------------------- 1.11.3 (2014-06-08) ------------------- 1.11.2 (2014-04-28) ------------------- 1.11.1 (2014-04-16) ------------------- 1.11.0 (2014-02-15) ------------------- 1.10.15 (2014-02-07) -------------------- 1.10.14 (2013-11-23 16:17) -------------------------- * Contributors: Vincent Rabaud 1.10.13 (2013-11-23 09:19) -------------------------- * Contributors: Vincent Rabaud 1.10.12 (2013-11-22) -------------------- * Contributors: Vincent Rabaud 1.10.11 (2013-10-23) -------------------- * Contributors: Vincent Rabaud 1.10.10 (2013-10-19) -------------------- * Contributors: Vincent Rabaud 1.10.9 (2013-10-07) ------------------- * Contributors: Vincent Rabaud 1.10.8 (2013-09-09) ------------------- * update email address * Contributors: Vincent Rabaud 1.10.7 (2013-07-17) ------------------- * update to REP 0127 * Contributors: Vincent Rabaud 1.10.6 (2013-03-01) ------------------- 1.10.5 (2013-02-11) ------------------- 1.10.4 (2013-02-02) ------------------- 1.10.3 (2013-01-17) ------------------- 1.10.2 (2013-01-13) ------------------- 1.10.1 (2013-01-10) ------------------- 1.10.0 (2013-01-03) ------------------- 1.9.15 (2013-01-02) ------------------- 1.9.14 (2012-12-30) ------------------- 1.9.13 (2012-12-15) ------------------- 1.9.12 (2012-12-14) ------------------- 1.9.11 (2012-12-10) ------------------- 1.9.10 (2012-10-04) ------------------- * the CMake file is useless * add the missing CMake file * re-add the meta-package * Contributors: Vincent Rabaud 1.9.9 (2012-10-01) ------------------ 1.9.8 (2012-09-30) ------------------ 1.9.7 (2012-09-28 21:07) ------------------------ 1.9.6 (2012-09-28 15:17) ------------------------ 1.9.5 (2012-09-15) ------------------ 1.9.4 (2012-09-13) ------------------ 1.9.3 (2012-09-12) ------------------ 1.9.2 (2012-09-07) ------------------ 1.9.1 (2012-08-28 22:06) ------------------------ 1.9.0 (2012-08-28 14:29) ------------------------ vision_opencv-1.12.3/vision_opencv/CMakeLists.txt000066400000000000000000000001601302104371500220710ustar00rootroot00000000000000cmake_minimum_required(VERSION 2.8.3) project(vision_opencv) find_package(catkin REQUIRED) catkin_metapackage() 
vision_opencv-1.12.3/vision_opencv/package.xml000066400000000000000000000014251302104371500214530ustar00rootroot00000000000000<package>
  <name>vision_opencv</name>
  <version>1.12.3</version>
  <description>Packages for interfacing ROS with OpenCV, a library of programming functions for real time computer vision.</description>
  <author>Patrick Mihelich</author>
  <author>James Bowman</author>
  <maintainer>Vincent Rabaud</maintainer>
  <license>BSD</license>
  <url type="website">http://www.ros.org/wiki/vision_opencv</url>
  <url type="bugtracker">https://github.com/ros-perception/vision_opencv/issues</url>
  <url type="repository">https://github.com/ros-perception/vision_opencv</url>
  <buildtool_depend>catkin</buildtool_depend>
  <run_depend>cv_bridge</run_depend>
  <run_depend>image_geometry</run_depend>
  <export>
    <metapackage/>
  </export>
</package>
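A minimal usage sketch (not part of the upstream archive) tying together the Python image_geometry API shown above. The calibration numbers are copied from the left camera in image_geometry/test/directed.py; the identity R and the chosen pixel are illustrative assumptions only:

#!/usr/bin/env python
# Sketch only: construct a PinholeCameraModel from a hand-filled CameraInfo
# message and round-trip a pixel through projectPixelTo3dRay/project3dToPixel.
import sensor_msgs.msg
from image_geometry import PinholeCameraModel

ci = sensor_msgs.msg.CameraInfo()
ci.width, ci.height = 640, 480
ci.distortion_model = 'plumb_bob'
ci.D = [-0.363528858080088, 0.16117037733986861,
        -8.1109585007538829e-05, -0.00044776712298447841, 0.0]
ci.K = [430.15433020105519, 0.0, 311.71339830549732,
        0.0, 430.60920415473657, 221.06824942698509,
        0.0, 0.0, 1.0]
ci.R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]  # assumption: identity R
ci.P = [295.53402059708782, 0.0, 285.55760765075684, 0.0,
        0.0, 295.53402059708782, 223.29617881774902, 0.0,
        0.0, 0.0, 1.0, 0.0]

cam = PinholeCameraModel()
cam.fromCameraInfo(ci)

# Unit ray through pixel (320, 240), then back to pixels; since Tx = Ty = 0
# here, project3dToPixel recovers roughly the same (u, v).
ray = cam.projectPixelTo3dRay((320, 240))
uv = cam.project3dToPixel(ray)
print ray, uv, cam.fx(), cam.cx()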