# Release Notes

## Known issues
 - Many decoders will panic on malicious input.
 - The color space information of pixels is not clearly communicated.

## Changes

### Version 0.25.5

Features:
- Added support for decoding 10-bit and 12-bit AVIF
- Initial, opt-in serde support for an enum. This may be extended to other types in the future.

Bug fixes:
- [Multiple bug fixes in AVIF decoding](https://github.com/image-rs/image/pull/2373)
- The `rayon` feature now correctly toggles the use of `rayon` when encoding AVIF. (Previously it would be either always on or always off, depending on the version of the `ravif` crate in your dependency tree.)
- The "jfif" file extension for JPEG images is now recognized

### Version 0.25.4

Features:
- Much faster decoding of lossless WebP due to a variety of optimizations. Our benchmarks show a 2x to 2.5x improvement.
- Added support for orientation metadata, so that e.g. smartphone camera images can be displayed correctly (see the sketch below):
  - Added `ImageDecoder::orientation()` and implemented orientation metadata extraction for the JPEG, WebP and TIFF formats
  - Added `DynamicImage::apply_orientation()` to apply the orientation to an image
- Added support for extracting Exif metadata from images via `ImageDecoder::exif_metadata()`, and implemented it for the JPEG and WebP formats
- Added `ImageEncoder::set_icc_profile()` and implemented it for the WebP format. Pull requests with implementations for other formats are welcome.
- Added `DynamicImage::fast_blur()` for a linear-time approximation of Gaussian blur, which is much faster at larger blur radii

Bug fixes:
- Fixed some APNG images being decoded incorrectly
- Fixed the iterator over animated WebP frames to return `None` instead of an error when the end of the animation is reached
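A minimal usage sketch of the orientation support listed above: read the orientation from the decoder before decoding, then apply it to the decoded image. The JPEG-specific decoder, the file path handling and the helper name are illustrative assumptions, not part of the release itself.

```rust
use std::fs::File;
use std::io::BufReader;

use image::codecs::jpeg::JpegDecoder;
use image::{DynamicImage, ImageDecoder, ImageResult};

// Hypothetical helper: decode a JPEG and honor its orientation metadata.
fn load_upright(path: &str) -> ImageResult<DynamicImage> {
    // The decoders expect a `BufRead + Seek` source.
    let mut decoder = JpegDecoder::new(BufReader::new(File::open(path)?))?;

    // Query the orientation before the decoder is consumed.
    let orientation = decoder.orientation()?;

    // Decode, then rotate/flip the pixels so the image displays upright.
    let mut image = DynamicImage::from_decoder(decoder)?;
    image.apply_orientation(orientation);
    Ok(image)
}
```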
### Version 0.25.3

Yanked! This version accidentally missed a commit that should have been included with the release: the `Orientation` struct should be in the appropriate module instead of at the top level. This release won't be supported.

### Version 0.25.2

Features:
- Added the HDR encoder to supported formats in generic write methods with the `hdr` feature enabled. Supports 32-bit float RGB color only, for now.
- When cloning `ImageBuffer`, `DynamicImage` and `Frame`, the existing buffer will now be reused if possible.
- Added `image::ImageReader` as an alias.
- Implement `ImageEncoder` for `HdrEncoder`.

Structural changes:
- Switched from `byteorder` to `byteorder-lite`, consolidating some casting unsafety into `bytemuck`.
- Many methods on `DynamicImage` and buffers gained `#[must_use]` indications.

Bug fixes:
- Removed test data included in the crate archive.
- The WebP animation decoder stops when reaching the indicated frame count.
- Fixed bugs in the `bmp` decoder.
- Format support gated on the `exr` feature now compiles in isolation.

### Version 0.25.1

Bug fixes:
- Fixed corrupt JPEG output when attempting to encode images containing an alpha channel.
- Only accept the ".ff" file extension for farbfeld images.
- Correct the farbfeld feature flag for `ImageFormat::{reading_enabled, writing_enabled}`.
- Disable strict mode for the JPEG decoder.
- Add a nasm feature to optionally enable faster AVIF encoding.

### Version 0.25.0

Breaking changes:
- Added a `BufRead` + `Seek` bound on many decoders.
- Use `ExtendedColorType` instead of `ColorType` when encoding.
- Removed `ImageOutputFormat`, `GenericImageView::bounds`, and several other deprecated items.
- Removed incremental decoding support and changed `ImageDecoder` so the trait is object safe.
- Pixel types are now `repr(transparent)` rather than `repr(C)`.
- Made the color_quant dependency optional.
- Renamed some feature flags.

Structural changes:
- Increased MSRV to 1.67.1

Codec changes:
- Switched to image-webp for WebP encoding.
- Switched to zune-jpeg for JPEG decoding.
- Made the HDR decoder produce f32 images.
- Removed DXT encoding and decoding support.

### Version 0.24.9

Structural changes:
- Relicense to MIT OR Apache-2.0
- Increase MSRV to 1.63.0

New features:
- Support limits in PNG animation decoding.
- Added offsets to SubImage to compensate for the now-deprecated bounds call from GenericImageView.

Bug fixes:
- Correct limit tests for TIFF.
- Avoid overflow in gif::Decoder::buffer_size.
- Return an error instead of asserting when the AVIF decoder encounters an unsupported or invalid bit depth.

### Version 0.24.8

New features:
- Added pure-Rust lossless WebP encoding.
- Added `DynamicImage::new` method.
- Added `PngDecoder::gamma_value` method.
- Added `ImageFormat::{reading_enabled, writing_enabled, all}`.
- TGA encoder now supports RLE encoding.
- Add rayon parallel iterators behind an optional `rayon` feature.
- Support CMYK TIFF images.
- Implement `From` for all image types.

Bug fixes:
- Fix decoding PNGs with invalid text chunks.
- Handle the non-fatal error dav1d::Error::Again.
- Do not round floats in interpolate.
- The PNM decoder now scales samples according to the specified maximum.
- Fix a wrong implementation of the unsharpen filter.
- Fix `GifDecoder::with_limits` to raise an error when limits are exceeded.

### Version 0.24.7

New features:
- Added `{ImageBuffer, DynamicImage}::write_with_encoder` to simplify writing images with custom settings (see the sketch after this section).
- Expose ICC profiles stored in tiff and webp files.
- Added an option to set the background color of animated webp images.
- New methods for sampling and interpolation of `GenericImageView`s

Bug fixes:
- Fix panic on empty dxt.
- Fix several panics in the webp decoder.
- Allow unknown chunks at the end of webp files.
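A sketch of the `write_with_encoder` call mentioned above, paired with a quality-configured JPEG encoder; the output path, the quality parameter and the helper name are illustrative assumptions.

```rust
use std::fs::File;
use std::io::BufWriter;

use image::codecs::jpeg::JpegEncoder;
use image::{ImageResult, RgbImage};

// Hypothetical helper: write a buffer as JPEG with an explicit quality setting.
fn save_jpeg_with_quality(image: &RgbImage, path: &str, quality: u8) -> ImageResult<()> {
    let writer = BufWriter::new(File::create(path)?);

    // Configure the encoder directly instead of relying on `save()` defaults.
    let encoder = JpegEncoder::new_with_quality(writer, quality);

    // `write_with_encoder` routes the pixel data through any `ImageEncoder`.
    image.write_with_encoder(encoder)
}
```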
### Version 0.24.6

- Add support for QOI.
- ImageDecoders now expose ICC profiles on supported formats.
- Add support for BMPs without a file header.
- Improved AVIF encoder.
- WebP decoding fixes.

### Version 0.24.5

Structural changes:
- Increased the minimum supported Rust version (MSRV) to 1.61.
- Increased the version requirement for the `tiff` crate to 0.8.0.
- Increased the version requirement for the `jpeg` crate to 0.3.0.

Bug fixes:
- The `as_rgb32f` function of `DynamicImage` is now correctly documented.
- Fixed a crash when decoding ICO images. Added a regression test.
- Fixed a panic when transforming webp images. Added a regression test.
- Added a check to prevent integer overflow when calculating the file size for BMP images. The missing check could panic in debug mode or else set an incorrect file size in release mode.
- Upgraded the PNG image encoder to use the newer `PngEncoder::write_image` instead of the deprecated `PngEncoder::encode`, which did not account for byte order and could result in images with incorrect colors.
- Fixed an `InsufficientMemory` error when trying to decode a PNG image.
- Fix warnings and CI issues.
- Typos and links in the documentation have been corrected.

Performance:
- Added a check for dynamic image dimensions before resizing. This improves performance in cases where the image does not need to be resized or has already been resized.

### Version 0.24.4

New Features:
- Encoding for `webp` is now available with the native library. This needs to be activated explicitly with the `web-encoder` feature.
- `exr` decoding has gained basic limit support.

Bug fixes:
- The `Iterator::size_hint` implementation of pixel iterators has been fixed to return the current length indicated by its `ExactSizeIterator` hint.
- Typos and bad references in the documentation have been removed.

Performance:
- `ImageBuffer::get_pixel{,_mut}` is now marked inline.
- `resize` now short-circuits when image dimensions are unchanged.

### Version 0.24.3

New Features:
- `TiffDecoder` now supports setting resource limits.

Bug fixes:
- Fix compile issues on little endian systems.
- Various panics discovered by fuzzing.

### Version 0.24.2

Structural changes:
- CI now runs `cargo-deny`, checking dependent crates against an OSS license list and against RUSTSEC advisories.

New Features:
- The WebP decoder recognizes and decodes images with a `VP8X` header.
- The DDS decoder recognizes and decodes images with `DX10` headers.

Bug fixes:
- Calling `DynamicImage`/`ImageBuffer`'s methods `write_to` and `save` will now work properly even if the backing container is larger than the image layout requires. Only the relevant slice of pixel data is passed to the encoder.
- Fixed an OOM panic caused by malformed images in the `gif` decoder.

### Version 0.24.1

Bug Fixes:
- `ImageBuffer::get_pixel_checked` would sometimes return the incorrect pixel (see the sketch below).
- PNG encoding would sometimes not recognize unsupported color.
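For reference, a hedged sketch of the bounds-checked accessor mentioned in the fix above (the accessor itself is introduced in 0.24.0 below); the transparent fallback color and the helper name are arbitrary choices.

```rust
use image::{Rgba, RgbaImage};

// `get_pixel_checked` returns `None` for out-of-bounds coordinates instead of
// panicking like the unchecked `get_pixel` accessor does.
fn sample_or_transparent(image: &RgbaImage, x: u32, y: u32) -> Rgba<u8> {
    image
        .get_pixel_checked(x, y)
        .copied()
        .unwrap_or(Rgba([0, 0, 0, 0]))
}
```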
### Version 0.24.0

Breaking changes

Structural changes:
- Minimum Rust version is now `1.56` and may change in minor versions until further notice. It is now tracked by the standard `[package.rust-version]` field in the library's `Cargo.toml`. Note: this applies _to the library itself_. You may need different version resolutions for dependencies when using a non-stable version of Rust.
- The `math::utils::{nq, utils}` modules have been removed. These are better served through the `color_quant` crate and the standard library respectively.
- All codecs are now available through `image::codecs`, no longer top-level.
- `ExtendedColorType` and `DynamicImage` have been made `#[non_exhaustive]`, providing more methods instead of exhaustive matching.
- Reading images through the generic `io::Reader`, as well as generic convenience interfaces, now requires the underlying reader to be `BufRead + Seek`. This allows supporting more formats more efficiently. Similarly, writing now requires writers to be `Write + Seek`.
- The `Bgra*` variants of buffers, which were only half-supported, have been removed. The owning buffer types `ImageBuffer` and `DynamicImage` fundamentally already make a choice in supported pixel representations. This allows for more consistent internal behavior. Callers are expected to convert formats when using those buffers, which they are required to do in any case already, and which is routinely performed by decoders.

Trait reworks:
- The `Pixel` trait is no longer implemented quite as liberally for structs defined in the crate. Instead, it is now restricted to a set of known channel types, which ensures accuracy in computations involving those channels.
- The `ImageDecoderExt` trait has been renamed to `ImageDecoderRect`, according to its actual functionality.
- The `Pixel` trait and its `Subpixel` field no longer require (or provide) a `'static` lifetime bound.
- The `Pixel` trait no longer requires specifying an associated, constant `ColorType`. This was of little relevance to computation but made it much harder to implement and extend correctly. Instead, the _private_ `PixelWithColorType` extension is added for interfaces that require a properly known variant.
- Reworked how `SubImage` interacts with the `GenericImage` trait. It is now a default implementation. Note that `SubImage` now has _inherent_ methods that avoid double indirection; the trait's methods no longer avoid it.
- The `Primitive` trait now requires implementations to provide a minimum and maximum logical bound for the purpose of converting to other primitive representations.

Additions

Image formats:
- Reading lossless WebP is now supported.
- The OpenEXR format is now supported.
- The `jpeg` decoder has been upgraded to support Lossless JPEG.
- The `AvifEncoder` now correctly handles alpha-less images. Some additional color formats are converted to RGBA as well.
- The `Bmp` codec now decodes more valid images. It can decode a raw image without performing the palette mapping. It provides a method to access the palette. The encoder provides the inverse capabilities.
- `Tiff` is now an output format.

Buffers and Operations:
- The channel / primitive type `f32` is now supported. Currently only the OpenEXR codec makes full use of it, but this is expected to change.
- `ImageBuffer::{get_pixel_checked, get_pixel_mut_checked}` provide panic-free access to pixels and channels by returning `Option<&P>` and `Option<&mut P>`.
- `ImageBuffer::write_to` has been added, encoding the buffer to a writer. This method already existed on `DynamicImage`.
- `DynamicImage` now implements `From<_>` for all supported buffer types.
- `DynamicImage` now implements `Default`, an empty `Rgba8` image.
- `imageops::overlay` now takes coordinates as `i64`.

Limits:
- Added `Limits` and `LimitSupport`, utilized in `io::Reader` (see the sketch after this section). These can be configured for rudimentary protection against resource exhaustion (images pretending to require a very large buffer). These types are not yet exhaustive by design, and more and stricter limits may be added in the future.
- Decoders that do provide inherent support for limits, or reserve a significant amount of internal memory, are urged to implement the `set_limits` extension to `ImageDecoder`. Some strict limits are opt-in, which may cause decoding to fail if not supported.

Miscellaneous:
- `PNMSubtype` has been renamed to `PnmSubtype`, per Rust's naming scheme.
- Several incorrectly capitalized `PNM*` aliases have been removed.
- Several `enum` types that had previously used a hidden variant now use the official `#[non_exhaustive]` attribute instead.
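A sketch of opting in to the decoding limits described above, using the 0.24-era paths (`image::io::Limits`, a PNG decoder from `image::codecs::png`); the concrete caps and the helper name are arbitrary assumptions.

```rust
use std::fs::File;
use std::io::BufReader;

use image::codecs::png::PngDecoder;
use image::io::Limits;
use image::{DynamicImage, ImageDecoder, ImageResult};

// Hypothetical helper: decode a PNG while capping dimensions and allocations.
fn decode_png_with_limits(path: &str) -> ImageResult<DynamicImage> {
    // `Limits` is non-exhaustive, so start from the defaults and adjust fields.
    let mut limits = Limits::default();
    limits.max_image_width = Some(8_192);
    limits.max_image_height = Some(8_192);
    limits.max_alloc = Some(256 * 1024 * 1024); // roughly 256 MiB

    let mut decoder = PngDecoder::new(BufReader::new(File::open(path)?))?;
    decoder.set_limits(limits)?;
    DynamicImage::from_decoder(decoder)
}
```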
### Version 0.23.14

- Unified gif blending in different decode methods, fixing out-of-bounds checks in a number of weirdly positioned frames.
- Hardened the TGA decoder against a number of malicious inputs.
- Fix forward incompatible usage of the panic macro.
- Fix load_rect for gif reaching `unreachable!()` code.
- Added `ExtendedColorType::A8`.
- Allow TGA to load alpha-only images.
- Optimized load_rect to avoid unnecessary seeks.

### Version 0.23.13

- Fix an inconsistency in supported formats of different methods for encoding an image.
- Fix `thumbnail` choosing an empty image. It now always prefers non-empty image dimensions.
- Fix integer overflow when calculating the required bytes for decoded image buffers in the farbfeld, hdr, and pnm decoders. These will now error early.
- Fix a panic decoding certain `jpeg` images without frames or metadata.
- Optimized the `jpeg` encoder.
- Optimized the `GenericImage::copy_from` default impl in various cases.
- Add `avif` decoders. The feature must be enabled explicitly and is not covered by our usual MSRV policy of Rust 1.34; only the latest stable is supported.
- Add `ImageFormat::{can_read, can_write}`
- Add `Frame::buffer_mut`
- Add speed and quality options on the `avif` encoder.
- Add a speed parameter to the `gif` encoder.
- Expose control over sequence repeat to the `gif` encoder.
- Add `{contrast,brighten,huerotate}_in_place` functions in imageproc.
- Relax the `Default` impl of `ImageBuffer`, removing the bound on the color type.
- Derive Debug, Hash, PartialEq, Eq for DynamicImage

### Version 0.23.12

- Fix a soundness issue affecting the impls of `Pixel::from_slice_mut`. This would previously reborrow the mutable input reference as a shared one but then proceed to construct the mutable result reference from it. While UB according to Rust's memory model, we're fairly certain that no miscompilation can happen with the LLVM codegen in practice. See 5cbe1e6767d11aff3f14c7ad69a06b04e8d583c7 for more details.
- Fix `imageops::blur` panicking when `sigma = 0.0`. It now defaults to `1.0`, as do all negative values.
- Fix re-exporting `png::{CompressionType, FilterType}` to maintain SemVer compatibility with the `0.23` releases.
- Add `ImageFormat::from_extension`
- Add copyless DynamicImage to byte slice/vec conversion.
- Add bit-depth specific `into_` and `to_` DynamicImage conversion methods.

### Version 0.23.11

- The `NeuQuant` implementation is now supplied by `color_quant`. Use of the type defined by this library is discouraged.
- The `jpeg` decoder can now downscale images during decoding by factors of 1, 2, 4, or 8.
- Optimized jpeg encoding by ~5-15%.
- Deprecated the `clamp` function. Use `num-traits` instead.
- The ICO decoder now accepts an empty mask.
- Fixed an overflow in ICO mask decoding that could potentially lead to a panic.
- Added `ImageOutputFormat` for `AVIF`
- Updated `tiff` to `0.6` with lzw performance improvements.

### Version 0.23.10

- Added AVIF encoding capabilities using the `ravif` crate. Please note that the feature targets the latest stable compiler and is not enabled by default.
- Added `ImageBuffer::as_raw` to inspect the underlying container.
- Updated `gif` to `0.11` with large performance improvements.

### Version 0.23.9

- Introduced correctly capitalized aliases for some screaming-case types
- Introduced `imageops::{vertical_gradient, horizontal_gradient}` for writing simple color gradients into an image (see the sketch below).
- Sped up methods iterating over `Pixels`, `PixelsMut`, etc. by using exact chunks internally. This should auto-vectorize `ImageBuffer::from_pixel`.
- Adjusted `Clone` impls of iterators to not require a bound on the pixel.
- Add `Debug` impls for iterators where the pixel's channel implements it.
- Add comparison impls for `FilterType`
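A sketch of the gradient helpers named above, assuming the `(&mut buffer, &start, &stop)` argument order; the colors and dimensions are arbitrary.

```rust
use image::imageops::vertical_gradient;
use image::{Rgb, RgbImage};

// Fill a fresh buffer with a top-to-bottom gradient between two colors.
fn make_gradient(width: u32, height: u32) -> RgbImage {
    let mut image = RgbImage::new(width, height);
    // Assumed call shape: destination buffer, start color, stop color.
    vertical_gradient(&mut image, &Rgb([0u8, 0, 64]), &Rgb([255, 255, 255]));
    image
}
```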
### Version 0.23.8

- `flat::Error` now implements the standard `Error` trait
- The type parameter of `Map` has been relaxed to `?Sized`
- Added the `imageops::tile` function that repeats one image across another

### Version 0.23.7

- Iterators over immutable pixels of `ImageBuffer` can now be cloned
- Added a `tga` encoder
- Added `ColorMap::lookup`, an optional reversal of the map
- The `EncodableLayout` trait is now exported

### Version 0.23.6

- Added `png::ApngDecoder`, an adapter decoding the animation in an APNG.
- Fixed a bug in `jpeg` encoding that would darken output colors.
- Added a utility constructor `FlatSamples::with_monocolor`.
- Added `ImageBuffer::as_flat_samples_mut`, a mutable variant of the existing ffi-helper `ImageBuffer::as_flat_samples`.

### Version 0.23.5

- The `png` encoder now allows configuring the compression and filter type. The output is not part of the stability guarantees, see its documentation.
- The `jpeg` encoder now accepts any implementor of `GenericImageView`. This allows images that are only partially present in memory to be encoded.
- `ImageBuffer` now derives `Hash`, `PartialEq`, `Eq`.
- The `Pixels`/`PixelsMut` iterator no longer yields out-of-bounds pixels when the underlying buffer is larger than required.
- The `pbm` decoder correctly decodes ascii data again, fixing a regression where it would use the sample value `1` as white instead of `255`.
- Fix encoding of RGBA data in `gif` frames.
- Constructing a `Rows`/`RowsMut` iterator no longer panics when the image has a width or height of `0`.

### Version 0.23.4

- Improved the performance of decoding animated gifs
- Added `crop_imm`, which functions like `crop` but on a shared reference (see the sketch below)
- The gif `DisposalMethod::Any` is treated as `Keep`, consistent with browsers
- Most errors no longer allocate a string but instead implement `Display`.
- Add some implementations of `Error::source`
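A sketch of the non-destructive crop mentioned above: `crop_imm` borrows the source buffer, and `to_image` copies the selected region into an owned buffer. The coordinates, size and helper name are arbitrary.

```rust
use image::imageops::crop_imm;
use image::RgbaImage;

// Borrow a rectangular view of the source and copy it out as its own buffer.
fn cut_out_region(source: &RgbaImage) -> RgbaImage {
    let view = crop_imm(source, 10, 10, 64, 64);
    view.to_image()
}
```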
### Version 0.23.3

- Added `ColorType::has_alpha` to facilitate lossless conversion
- Recognize extended WebP formats for decoding
- Added decoding and encoding for the `farbfeld` format
- Export named iterator types created from various `ImageBuffer` methods
- Error in the jpeg encoder for images larger than 65536 pixels; this fixes a panic

### Version 0.23.2

- The dependency on `jpeg-decoder` now reflects minimum requirements.

### Version 0.23.1

- Fix cmyk_to_rgb (jpeg) causing off-by-one rounding errors.
- A number of performance improvements for jpeg (encode and decode), bmp, vp8
- Added more details to errors for many formats

### Version 0.23.0

This major release intends to improve the interface with regards to handling of color format data and errors, for both decoding and encoding. This necessitated many breaking changes anyway, so the opportunity was also used to improve compliance with the interface guidelines, such as outstanding renamings. It is not yet perfect with regards to color spaces, but it was designed mainly as an improvement over the previous interface for in-memory color formats first. We'll get to color spaces in a later major version.

- Heavily reworked `ColorType`:
  - This type is now used for denoting formats for which we support operations on buffers in these memory representations. In particular, all channels in pixel types are assumed to be an integer number of bytes (in terms of the Rust type system, these are `Sized` and one can create slices of channel values).
  - An `ExtendedColorType` is used to express more generic color formats for which the library has limited support but which can be converted/scaled/mapped into a `ColorType` buffer. This operation might be fallible but, for example, includes sources with 1/2/4-bit components.
  - Both types are non-exhaustive, to add more formats in a minor release.
  - A work-in-progress (#1085) will further separate the color model from the specific channel instantiation; e.g. both `8-bit RGB` and `16-bit BGR` are instantiations of the `RGB` color model.
- Heavily reworked `ImageError`:
  - The top-level enum type now serves to differentiate the cause, with multiple opaque representations for the actual error. These are no longer simple `String`s but contain useful types. Third-party decoders that have no variant in `ImageFormat` have also been considered.
  - Support for `Error::source` that can be downcast to an error from a matching version of the underlying decoders. Note that the version is not part of the stable interface guarantees; this should not be relied upon for correctness and only be used as an optimization.
  - Added image format indications to errors.
  - The error values produced by decoders will be upgraded incrementally. See something that still produces plain old String messages? Feel free to send a PR.
- Reworked the `ImageDecoder` trait:
  - `read_image` takes an output buffer argument instead of allocating all memory on its own.
  - The return type of `dimensions` now aligns with `GenericImage` sizes.
  - The `colortype` method was renamed to `color_type` for conformity.
- The enums `ColorType`, `DynamicImage`, `imageops::FilterType`, `ImageFormat` no longer re-export all of their variants in the top-level of the crate. This removes the growing pollution in the documentation and usage. You can still insert the equivalent statement on your own: `use image::ImageFormat::{self, *};`
- The result of `encode` operations is now uniformly an `ImageResult<()>`.
- Removed public converters from some `tiff`, `png`, `gif`, `jpeg` types, mainly such as error conversion. This allows upgrading the dependency across major versions without a major release in `image` itself.
- On that note, the public interface of the `gif` encoder no longer takes a `gif::Frame` but rather deals with `image::Frame` only. If you need to specify the disposal method, transparency, etc., then you may want to hold off on upgrading (but see the next change).
- The `gif` encoder now errors on invalid dimensions or unsupported color formats. It would previously silently reinterpret bytes as RGB/RGBA.
- The capitalization of `ImageFormat` and other enum variants has been adjusted to adhere to the API guidelines. These variants are now spelled `Gif`, `Png`, etc. The same change has been made to the name of types such as `HDRDecoder`.
- The `Progress` type has finally received public accessor methods. Strange that no one reported them missing.
- Introduced `PixelDensity` and `PixelDensityUnit` to store DPI information in formats that support encoding this form of metadata (e.g. in `jpeg`).

### Version 0.22.5

- Added `GenericImage::copy_within`, specialized for `ImageBuffer`
- Fixed decoding of interlaced `gif` files
- Prepare for future compatibility of array `IntoIterator` in example code

### Version 0.22.4

- Added in-place variants for flip and rotate operations.
- The bmp encoder now checks if dimensions are valid for the format. It would previously write a subset or panic.
- Removed deprecated implementations of `Error::description`
- Added `DynamicImage::into_*`, which convert without an additional allocation.
- The PNG encoder errors on unsupported color types where it had previously silently swapped color channels.
- Enabled saving images as `gif` with `save_buffer`.

### Version 0.22.3

- Added a new module `io` containing a configurable `Reader`. It can replace the bunch of free functions `image::{load_*, open, image_dimensions}` while enabling new combinations such as `open` but with the format deduced from content instead of the file path (see the sketch below).
- Fixed the `const_err` lint in the macro expanded implementations of `Pixel`. This can only affect your crate if `image` is used as a path dependency.
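A sketch of the content-based format deduction enabled by the new `io::Reader` described above; the path handling and helper name are illustrative assumptions.

```rust
use image::io::Reader;
use image::{DynamicImage, ImageResult};

// Open an image and let the reader sniff the format from the file contents
// rather than trusting the path's extension.
fn open_by_content(path: &str) -> ImageResult<DynamicImage> {
    Reader::open(path)?         // buffered reader; format guessed from the extension
        .with_guessed_format()? // re-guess the format from the magic bytes
        .decode()
}
```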
### Version 0.22.2

- Undeprecate `unsafe` trait accessors. Further evaluation showed that their deprecation should be delayed until trait `impl` specialization is available.
- Fixed magic bytes used to detect `tiff` images.
- Added `DynamicImage::from_decoder`.
- Fixed a bug in the `PNGReader` that caused an infinite loop.
- Added `ColorType::{bits_per_pixel, num_components}`.
- Added `ImageFormat::from_path`, same format deduction as the `open` method.
- Fixed a panic in the gif decoder.
- Aligned background color handling of `gif` to web browser implementations.
- Fixed handling of partial frames in animated `gif`.
- Removed the unused direct `lzw` dependency, an indirect dependency in `tiff`.

### Version 0.22.1

- Fixed the build when no features are enabled

### Version 0.22

- The required Rust version is now `1.34.2`.
- Note the website and blog: [image-rs.org][1] and [blog.image-rs.org][2]
- `PixelMut` is now only on `ImageBuffer` and removed from the `GenericImage` interface. Prefer iterating manually in the generic case.
- Replaced an unsafe interface in the hdr decoder with a safe variant.
- Support loading 2-bit BMP images
- Add a method to save an `ImageBuffer`/`DynamicImage` with a specified format (see the sketch below)
- Update tiff to `0.3` with a writer
- Update png to `0.15`; fixes reading of interlaced sub-byte pixels
- Always use a custom struct for `ImageDecoder::Reader`
- Added `apply_without_alpha` and `map_without_alpha` to the `Pixel` trait
- Pixel information now comes with associated constants instead of static methods
- Changed color structs to tuple types with a single component. Improves ergonomics of destructuring assignment and construction.
- Add a lifetime parameter on the `ImageDecoder` trait.
- Remove unnecessary `'static` bounds on affine operations
- Add a function to retrieve image dimensions without loading the full image
- Allow different image types in overlay and replace
- Iterators over rows of `ImageBuffer`, mutable variants

[1]: https://www.image-rs.org
[2]: https://blog.image-rs.org
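A sketch of saving with an explicitly chosen format rather than deducing it from the file extension; it uses the current (post-0.23) spelling of the format enum, and the helper name is arbitrary.

```rust
use image::{DynamicImage, ImageFormat, ImageResult};

// Write the image as PNG regardless of what the output path's extension says.
fn export_as_png(image: &DynamicImage, path: &str) -> ImageResult<()> {
    image.save_with_format(path, ImageFormat::Png)
}
```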
### Version 0.21.2

- Fixed a variety of crashes and opaque errors in webp
- Updated the png limits to be less restrictive
- Reworked even more `unsafe` operations into safe alternatives
- Derived Debug on FilterType and Deref on Pixel
- Removed a restriction on DXT to always require power of two dimensions
- Change the encoding of RGBA in bmp using bitfields
- Corrected various URLs

### Version 0.21.1

- A fairly important bugfix backport
- Fixed a potential memory safety issue in the hdr and tiff decoders, see #885
- See [the full advisory](docs/2019-04-23-memory-unsafety.md) for an analysis
- Fixes `ImageBuffer` index calculation for very, very large images
- Fix some crashes while parsing specific incomplete pnm images
- Added comprehensive fuzzing for the pam image types

### Version 0.21

- Updated README to use `GenericImageView`
- Removed the outdated version number from CHANGES
- Now compiles with the wasm-unknown-emscripten target
- Restructured the `ImageDecoder` trait
- Updated README with a more colorful example for the Julia fractal
- Use Rust 1.24.1 as the minimum supported version
- Support for loading GIF frames one at a time with `animation::Frames`
- The TGA decoder now recognizes 32 bpp as RGBA(8)
- Fixed the `to_bgra` document comment
- Added a release test script
- Removed unsafe code blocks in several places
- Fixed overlay overflow bugs, with documented proofs

### Version 0.20

- Clippy lint pass
- Updated the num-rational dependency
- Added BGRA and BGR color types
- Improved performance of image resizing
- Improved PBM decoding
- PNM P4 decoding now returns bits instead of bytes
- Fixed a move of overlapping buffers in the BMP decoder
- Fixed some document comments
- `GenericImage` and `GenericImageView` are now object-safe
- Moved TIFF code to its own library
- Fixed README examples
- Fixed the ordering of interpolated parameters in the TIFF decode error string
- Thumbnail now handles upscaling
- GIF encoding for multiple frames
- Improved subimages API
- Cargo fmt fixes

### Version 0.19

- Fixed a panic when blending with alpha zero.
- Made `save` consistent.
- Consistent size calculation.
- Fixed a bug in `apply_with_alpha`.
- Implemented `TGADecoder::read_scanline`.
- Use the deprecated attribute for `pixels_mut`.
- Fixed a bug in JPEG grayscale encoding.
- Fixed multi-image TIFF.
- PNM encoder.
- Added `#[derive(Hash)]` for `ColorType`.
- Use `num-derive` for `#[derive(FromPrimitive)]`.
- Added an `into_frames` implementation for GIF.
- Made rayon an optional dependency.
- Fixed an issue where resizing an image did not give the exact width/height.
- Improved downscale.
- Added a way to expose options when saving files.
- Fixed some compiler warnings.
- Switched to the lzw crate instead of using the built-in version.
- Added `ExactSizeIterator` implementations to buffer structs.
- Added `resize_to_fill` method.
- DXT encoding support.
- Applied clippy suggestions.

### Version 0.4

- Various improvements.
- Additional supported image formats (BMP and ICO).
- GIF and PNG codecs moved into separate crates.

### Version 0.3

- Replace `std::old_io` with `std::io`.

### Version 0.2

- Support for interlaced PNG images.
- Writing support for GIF images (full color and paletted).
- Color quantizer that converts 32-bit images to paletted, including the alpha channel.
- Initial support for reading TGA images.
- Reading support for TIFF images (packbits and FAX compression not supported).
- Various bug fixes and improvements.

### Version 0.1

- Initial release
- Basic reading support for png, jpeg, gif, ppm and webp.
- Basic writing support for png and jpeg.
- A collection of basic image processing functions like `blur` or `invert`
"registry+https://github.com/rust-lang/crates.io-index" checksum = "808cf2735cd4b6866113f648b791c6adc5714537bc222d9347bb203386ffda56" dependencies = [ "same-file", "winapi", "winapi-util", ] [[package]] name = "wasi" version = "0.11.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "wasm-bindgen" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8" dependencies = [ "cfg-if", "wasm-bindgen-macro", ] [[package]] name = "wasm-bindgen-backend" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da" dependencies = [ "bumpalo", "log", "once_cell", "proc-macro2", "quote", "syn 2.0.87", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-macro" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726" dependencies = [ "quote", "wasm-bindgen-macro-support", ] [[package]] name = "wasm-bindgen-macro-support" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7" dependencies = [ "proc-macro2", "quote", "syn 2.0.87", "wasm-bindgen-backend", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-shared" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96" [[package]] name = "web-sys" version = "0.3.60" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bcda906d8be16e728fd5adc5b729afad4e444e106ab28cd1c7256e54fa61510f" dependencies = [ "js-sys", "wasm-bindgen", ] [[package]] name = "weezl" version = "0.1.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53a85b86a771b1c87058196170769dd264f66c0782acf1ae6cc51bfd64b39082" [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-util" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178" dependencies = [ "winapi", ] [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" [[package]] name = "windows-sys" version = "0.52.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d" dependencies = [ "windows-targets", ] [[package]] name = "windows-targets" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb" dependencies = [ "windows_aarch64_gnullvm", 
"windows_aarch64_msvc", "windows_i686_gnu", "windows_i686_gnullvm", "windows_i686_msvc", "windows_x86_64_gnu", "windows_x86_64_gnullvm", "windows_x86_64_msvc", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263" [[package]] name = "windows_aarch64_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6" [[package]] name = "windows_i686_gnu" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670" [[package]] name = "windows_i686_gnullvm" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9" [[package]] name = "windows_i686_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf" [[package]] name = "windows_x86_64_gnu" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596" [[package]] name = "windows_x86_64_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0" [[package]] name = "winnow" version = "0.6.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "14b9415ee827af173ebb3f15f9083df5a122eb93572ec28741fb153356ea2578" dependencies = [ "memchr", ] [[package]] name = "zune-core" version = "0.4.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f423a2c17029964870cfaabb1f13dfab7d092a62a29a89264f4d36990ca414a" [[package]] name = "zune-jpeg" version = "0.4.13" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "16099418600b4d8f028622f73ff6e3deaabdff330fb9a2a131dea781ee8b0768" dependencies = [ "zune-core", ] image-0.25.5/Cargo.toml0000644000000075060000000000100102150ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2021" rust-version = "1.70.0" name = "image" version = "0.25.5" authors = ["The image-rs Developers"] build = false exclude = [ "src/png/testdata/*", "examples/*", "tests/*", ] include = [ "/LICENSE-APACHE", "/LICENSE-MIT", "/README.md", "/CHANGES.md", "/src/", "/benches/", ] autobins = false autoexamples = false autotests = false autobenches = false description = "Imaging library. Provides basic image processing and encoders/decoders for common image formats." 
homepage = "https://github.com/image-rs/image" documentation = "https://docs.rs/image" readme = "README.md" categories = [ "multimedia::images", "multimedia::encoding", "encoding", ] license = "MIT OR Apache-2.0" repository = "https://github.com/image-rs/image" resolver = "2" [package.metadata.docs.rs] all-features = true rustdoc-args = [ "--cfg", "docsrs", ] [lib] name = "image" path = "src/lib.rs" [[bench]] name = "blur" path = "benches/blur.rs" harness = false [[bench]] name = "copy_from" path = "benches/copy_from.rs" harness = false [[bench]] name = "decode" path = "benches/decode.rs" harness = false [[bench]] name = "encode" path = "benches/encode.rs" harness = false [[bench]] name = "fast_blur" path = "benches/fast_blur.rs" harness = false [dependencies.bytemuck] version = "1.8.0" features = ["extern_crate_alloc"] [dependencies.byteorder-lite] version = "0.1.0" [dependencies.color_quant] version = "1.1" optional = true [dependencies.dav1d] version = "0.10.3" optional = true [dependencies.exr] version = "1.5.0" optional = true [dependencies.gif] version = "0.13" optional = true [dependencies.image-webp] version = "0.2.0" optional = true [dependencies.mp4parse] version = "0.17.0" optional = true [dependencies.num-traits] version = "0.2.0" [dependencies.png] version = "0.17.6" optional = true [dependencies.qoi] version = "0.4" optional = true [dependencies.ravif] version = "0.11.11" optional = true default-features = false [dependencies.rayon] version = "1.7.0" optional = true [dependencies.rgb] version = "0.8.48" optional = true default-features = false [dependencies.serde] version = "1.0.214" features = ["derive"] optional = true [dependencies.tiff] version = "0.9.0" optional = true [dependencies.zune-core] version = "0.4.12" optional = true default-features = false [dependencies.zune-jpeg] version = "0.4.13" optional = true [dev-dependencies.crc32fast] version = "1.2.0" [dev-dependencies.criterion] version = "0.5.0" [dev-dependencies.glob] version = "0.3" [dev-dependencies.num-complex] version = "0.4" [dev-dependencies.quickcheck] version = "1" [features] avif = [ "dep:ravif", "dep:rgb", ] avif-native = [ "dep:mp4parse", "dep:dav1d", ] benchmarks = [] bmp = [] color_quant = ["dep:color_quant"] dds = [] default = [ "rayon", "default-formats", ] default-formats = [ "avif", "bmp", "dds", "exr", "ff", "gif", "hdr", "ico", "jpeg", "png", "pnm", "qoi", "tga", "tiff", "webp", ] exr = ["dep:exr"] ff = [] gif = [ "dep:gif", "dep:color_quant", ] hdr = [] ico = [ "bmp", "png", ] jpeg = [ "dep:zune-core", "dep:zune-jpeg", ] nasm = ["ravif?/asm"] png = ["dep:png"] pnm = [] qoi = ["dep:qoi"] rayon = [ "dep:rayon", "ravif?/threading", ] serde = ["dep:serde"] tga = [] tiff = ["dep:tiff"] webp = ["dep:image-webp"] image-0.25.5/Cargo.toml.orig000064400000000000000000000063371046102023000136770ustar 00000000000000[package] name = "image" version = "0.25.5" edition = "2021" resolver = "2" # note: when changed, also update test runner in `.github/workflows/rust.yml` rust-version = "1.70.0" license = "MIT OR Apache-2.0" description = "Imaging library. Provides basic image processing and encoders/decoders for common image formats." 
authors = ["The image-rs Developers"] readme = "README.md" # crates.io metadata documentation = "https://docs.rs/image" repository = "https://github.com/image-rs/image" homepage = "https://github.com/image-rs/image" categories = ["multimedia::images", "multimedia::encoding", "encoding"] # Crate build related exclude = ["src/png/testdata/*", "examples/*", "tests/*"] include = [ "/LICENSE-APACHE", "/LICENSE-MIT", "/README.md", "/CHANGES.md", "/src/", "/benches/", ] [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] bytemuck = { version = "1.8.0", features = ["extern_crate_alloc"] } # includes cast_vec byteorder-lite = "0.1.0" num-traits = { version = "0.2.0" } # Optional dependencies color_quant = { version = "1.1", optional = true } dav1d = { version = "0.10.3", optional = true } exr = { version = "1.5.0", optional = true } gif = { version = "0.13", optional = true } image-webp = { version = "0.2.0", optional = true } mp4parse = { version = "0.17.0", optional = true } png = { version = "0.17.6", optional = true } qoi = { version = "0.4", optional = true } ravif = { version = "0.11.11", default-features = false, optional = true } rayon = { version = "1.7.0", optional = true } rgb = { version = "0.8.48", default-features = false, optional = true } tiff = { version = "0.9.0", optional = true } zune-core = { version = "0.4.12", default-features = false, optional = true } zune-jpeg = { version = "0.4.13", optional = true } serde = { version = "1.0.214", optional = true, features = ["derive"] } [dev-dependencies] crc32fast = "1.2.0" num-complex = "0.4" glob = "0.3" quickcheck = "1" criterion = "0.5.0" [features] default = ["rayon", "default-formats"] # Format features default-formats = ["avif", "bmp", "dds", "exr", "ff", "gif", "hdr", "ico", "jpeg", "png", "pnm", "qoi", "tga", "tiff", "webp"] avif = ["dep:ravif", "dep:rgb"] bmp = [] dds = [] exr = ["dep:exr"] ff = [] # Farbfeld image format gif = ["dep:gif", "dep:color_quant"] hdr = [] ico = ["bmp", "png"] jpeg = ["dep:zune-core", "dep:zune-jpeg"] png = ["dep:png"] pnm = [] qoi = ["dep:qoi"] tga = [] tiff = ["dep:tiff"] webp = ["dep:image-webp"] # Other features rayon = ["dep:rayon", "ravif?/threading"] # Enables multi-threading nasm = ["ravif?/asm"] # Enables use of nasm by rav1e (requires nasm to be installed) color_quant = ["dep:color_quant"] # Enables color quantization avif-native = ["dep:mp4parse", "dep:dav1d"] # Enable native dependency libdav1d benchmarks = [] # Build some inline benchmarks. Useful only during development (requires nightly Rust) serde = ["dep:serde"] [[bench]] path = "benches/decode.rs" name = "decode" harness = false [[bench]] path = "benches/encode.rs" name = "encode" harness = false [[bench]] name = "copy_from" harness = false [[bench]] path = "benches/fast_blur.rs" name = "fast_blur" harness = false [[bench]] path = "benches/blur.rs" name = "blur" harness = false image-0.25.5/LICENSE-APACHE000064400000000000000000000236761046102023000127410ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS image-0.25.5/LICENSE-MIT000064400000000000000000000020141046102023000124300ustar 00000000000000MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
image-0.25.5/README.md000064400000000000000000000223551046102023000122650ustar 00000000000000# Image [![crates.io](https://img.shields.io/crates/v/image.svg)](https://crates.io/crates/image) [![Documentation](https://docs.rs/image/badge.svg)](https://docs.rs/image) [![Build Status](https://github.com/image-rs/image/workflows/Rust%20CI/badge.svg)](https://github.com/image-rs/image/actions) [![Gitter](https://badges.gitter.im/image-rs/image.svg)](https://gitter.im/image-rs/image?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) Maintainers: [@HeroicKatora](https://github.com/HeroicKatora), [@fintelia](https://github.com/fintelia) [How to contribute](https://github.com/image-rs/organization/blob/master/CONTRIBUTING.md) ## An Image Processing Library This crate provides basic image processing functions and methods for converting to and from various image formats. All image processing functions provided operate on types that implement the `GenericImageView` and `GenericImage` traits and return an `ImageBuffer`. ## High level API Load images using [`ImageReader`]: ```rust,ignore use std::io::Cursor; use image::ImageReader; let img = ImageReader::open("myimage.png")?.decode()?; let img2 = ImageReader::new(Cursor::new(bytes)).with_guessed_format()?.decode()?; ``` And save them using [`save`] or [`write_to`] methods: ```rust,ignore img.save("empty.jpg")?; let mut bytes: Vec = Vec::new(); img2.write_to(&mut Cursor::new(&mut bytes), image::ImageFormat::Png)?; ``` ## Supported Image Formats With default features enabled, `image` provides implementations of many common image format encoders and decoders. | Format | Decoding | Encoding | | -------- | ----------------------------------------- | --------------------------------------- | | AVIF | Yes \* | Yes (lossy only) | | BMP | Yes | Yes | | DDS | Yes | --- | | Farbfeld | Yes | Yes | | GIF | Yes | Yes | | HDR | Yes | Yes | | ICO | Yes | Yes | | JPEG | Yes | Yes | | EXR | Yes | Yes | | PNG | Yes | Yes | | PNM | Yes | Yes | | QOI | Yes | Yes | | TGA | Yes | Yes | | TIFF | Yes | Yes | | WebP | Yes | Yes (lossless only) | - \* Requires the `avif-native` feature, uses the libdav1d C library. ## Image Types This crate provides a number of different types for representing images. Individual pixels within images are indexed with (0,0) at the top left corner. ### [`ImageBuffer`](https://docs.rs/image/*/image/struct.ImageBuffer.html) An image parameterised by its Pixel type, represented by a width and height and a vector of pixels. It provides direct access to its pixels and implements the `GenericImageView` and `GenericImage` traits. ### [`DynamicImage`](https://docs.rs/image/*/image/enum.DynamicImage.html) A `DynamicImage` is an enumeration over all supported `ImageBuffer
<P>
` types. Its exact image type is determined at runtime. It is the type returned when opening an image. For convenience `DynamicImage` reimplements all image processing functions. ### The [`GenericImageView`](https://docs.rs/image/*/image/trait.GenericImageView.html) and [`GenericImage`](https://docs.rs/image/*/image/trait.GenericImage.html) Traits Traits that provide methods for inspecting (`GenericImageView`) and manipulating (`GenericImage`) images, parameterised over the image's pixel type. ### [`SubImage`](https://docs.rs/image/*/image/struct.SubImage.html) A view into another image, delimited by the coordinates of a rectangle. The coordinates given set the position of the top left corner of the rectangle. This is used to perform image processing functions on a subregion of an image. ## The [`ImageDecoder`](https://docs.rs/image/*/image/trait.ImageDecoder.html) and [`ImageDecoderRect`](https://docs.rs/image/*/image/trait.ImageDecoderRect.html) Traits All image format decoders implement the `ImageDecoder` trait which provide basic methods for getting image metadata and decoding images. Some formats additionally provide `ImageDecoderRect` implementations which allow for decoding only part of an image at once. The most important methods for decoders are... + **dimensions**: Return a tuple containing the width and height of the image. + **color_type**: Return the color type of the image data produced by this decoder. + **read_image**: Decode the entire image into a slice of bytes. ## Pixels `image` provides the following pixel types: + **Rgb**: RGB pixel + **Rgba**: RGB with alpha (RGBA pixel) + **Luma**: Grayscale pixel + **LumaA**: Grayscale with alpha All pixels are parameterised by their component type. ## Image Processing Functions These are the functions defined in the `imageops` module. All functions operate on types that implement the `GenericImage` trait. Note that some of the functions are very slow in debug mode. Make sure to use release mode if you experience any performance issues. + **blur**: Performs a Gaussian blur on the supplied image. + **brighten**: Brighten the supplied image. + **huerotate**: Hue rotate the supplied image by degrees. + **contrast**: Adjust the contrast of the supplied image. + **crop**: Return a mutable view into an image. + **filter3x3**: Perform a 3x3 box filter on the supplied image. + **flip_horizontal**: Flip an image horizontally. + **flip_vertical**: Flip an image vertically. + **grayscale**: Convert the supplied image to grayscale. + **invert**: Invert each pixel within the supplied image This function operates in place. + **resize**: Resize the supplied image to the specified dimensions. + **rotate180**: Rotate an image 180 degrees clockwise. + **rotate270**: Rotate an image 270 degrees clockwise. + **rotate90**: Rotate an image 90 degrees clockwise. + **unsharpen**: Performs an unsharpen mask on the supplied image. For more options, see the [`imageproc`](https://crates.io/crates/imageproc) crate. ## Examples ### Opening and Saving Images `image` provides the `open` function for opening images from a path. The image format is determined from the path's file extension. An `io` module provides a reader which offer some more control. ```rust,no_run use image::GenericImageView; // Use the open function to load an image from a Path. // `open` returns a `DynamicImage` on success. let img = image::open("tests/images/jpg/progressive/cat.jpg").unwrap(); // The dimensions method returns the images width and height. 
println!("dimensions {:?}", img.dimensions()); // The color method returns the image's `ColorType`. println!("{:?}", img.color()); // Write the contents of this image to the Writer in PNG format. img.save("test.png").unwrap(); ``` ### Generating Fractals ```rust,no_run //! An example of generating julia fractals. let imgx = 800; let imgy = 800; let scalex = 3.0 / imgx as f32; let scaley = 3.0 / imgy as f32; // Create a new ImgBuf with width: imgx and height: imgy let mut imgbuf = image::ImageBuffer::new(imgx, imgy); // Iterate over the coordinates and pixels of the image for (x, y, pixel) in imgbuf.enumerate_pixels_mut() { let r = (0.3 * x as f32) as u8; let b = (0.3 * y as f32) as u8; *pixel = image::Rgb([r, 0, b]); } // A redundant loop to demonstrate reading image data for x in 0..imgx { for y in 0..imgy { let cx = y as f32 * scalex - 1.5; let cy = x as f32 * scaley - 1.5; let c = num_complex::Complex::new(-0.4, 0.6); let mut z = num_complex::Complex::new(cx, cy); let mut i = 0; while i < 255 && z.norm() <= 2.0 { z = z * z + c; i += 1; } let pixel = imgbuf.get_pixel_mut(x, y); let image::Rgb(data) = *pixel; *pixel = image::Rgb([data[0], i as u8, data[2]]); } } // Save the image as “fractal.png”, the format is deduced from the path imgbuf.save("fractal.png").unwrap(); ``` Example output: A Julia Fractal, c: -0.4 + 0.6i ### Writing raw buffers If the high level interface is not needed because the image was obtained by other means, `image` provides the function `save_buffer` to save a buffer to a file. ```rust,no_run let buffer: &[u8] = unimplemented!(); // Generate the image data // Save the buffer as "image.png" image::save_buffer("image.png", buffer, 800, 600, image::ExtendedColorType::Rgb8).unwrap() ``` image-0.25.5/benches/README.md000064400000000000000000000002461046102023000136670ustar 00000000000000# Getting started with benchmarking To run the benchmarks you need a nightly rust toolchain. 
Then you launch it with cargo +nightly bench --features=benchmarks image-0.25.5/benches/blur.rs000064400000000000000000000005711046102023000137230ustar 00000000000000use criterion::{criterion_group, criterion_main, Criterion}; use image::{imageops::blur, ImageBuffer, Rgb}; pub fn bench_fast_blur(c: &mut Criterion) { let src = ImageBuffer::from_pixel(1024, 768, Rgb([255u8, 0, 0])); c.bench_function("blur", |b| { b.iter(|| blur(&src, 50.0)); }); } criterion_group!(benches, bench_fast_blur); criterion_main!(benches); image-0.25.5/benches/copy_from.rs000064400000000000000000000007621046102023000147560ustar 00000000000000use criterion::{black_box, criterion_group, criterion_main, Criterion}; use image::{GenericImage, ImageBuffer, Rgba}; pub fn bench_copy_from(c: &mut Criterion) { let src = ImageBuffer::from_pixel(2048, 2048, Rgba([255u8, 0, 0, 255])); let mut dst = ImageBuffer::from_pixel(2048, 2048, Rgba([0u8, 0, 0, 255])); c.bench_function("copy_from", |b| { b.iter(|| dst.copy_from(black_box(&src), 0, 0)); }); } criterion_group!(benches, bench_copy_from); criterion_main!(benches); image-0.25.5/benches/decode.rs000064400000000000000000000056741046102023000142130ustar 00000000000000use std::{fs, iter, path}; use criterion::{criterion_group, criterion_main, Criterion}; use image::ImageFormat; #[derive(Clone, Copy)] struct BenchDef { dir: &'static [&'static str], files: &'static [&'static str], format: ImageFormat, } fn load_all(c: &mut Criterion) { const BENCH_DEFS: &[BenchDef] = &[ BenchDef { dir: &["bmp", "images"], files: &[ "Core_1_Bit.bmp", "Core_4_Bit.bmp", "Core_8_Bit.bmp", "rgb16.bmp", "rgb24.bmp", "rgb32.bmp", "pal4rle.bmp", "pal8rle.bmp", "rgb16-565.bmp", "rgb32bf.bmp", ], format: ImageFormat::Bmp, }, BenchDef { dir: &["gif", "simple"], files: &["alpha_gif_a.gif", "sample_1.gif"], format: ImageFormat::Gif, }, BenchDef { dir: &["hdr", "images"], files: &["image1.hdr", "rgbr4x4.hdr"], format: ImageFormat::Hdr, }, BenchDef { dir: &["ico", "images"], files: &[ "bmp-24bpp-mask.ico", "bmp-32bpp-alpha.ico", "png-32bpp-alpha.ico", "smile.ico", ], format: ImageFormat::Ico, }, BenchDef { dir: &["jpg", "progressive"], files: &["3.jpg", "cat.jpg", "test.jpg"], format: ImageFormat::Jpeg, }, // TODO: pnm // TODO: png BenchDef { dir: &["tga", "testsuite"], files: &["cbw8.tga", "ctc24.tga", "ubw8.tga", "utc24.tga"], format: ImageFormat::Tga, }, BenchDef { dir: &["tiff", "testsuite"], files: &[ "hpredict.tiff", "hpredict_packbits.tiff", "mandrill.tiff", "rgb-3c-16b.tiff", ], format: ImageFormat::Tiff, }, BenchDef { dir: &["webp", "lossy_images"], files: &["simple-gray.webp", "simple-rgb.webp"], format: ImageFormat::WebP, }, ]; for bench in BENCH_DEFS { bench_load(c, bench); } } criterion_group!(benches, load_all); criterion_main!(benches); fn bench_load(c: &mut Criterion, def: &BenchDef) { let group_name = format!("load-{:?}", def.format); let mut group = c.benchmark_group(&group_name); let paths = IMAGE_DIR.iter().chain(def.dir); for file_name in def.files { let path: path::PathBuf = paths.clone().chain(iter::once(file_name)).collect(); let buf = fs::read(path).unwrap(); group.bench_function(file_name.to_owned(), |b| { b.iter(|| { image::load_from_memory_with_format(&buf, def.format).unwrap(); }); }); } } const IMAGE_DIR: [&str; 3] = [".", "tests", "images"]; image-0.25.5/benches/encode.rs000064400000000000000000000102031046102023000142050ustar 00000000000000extern crate criterion; use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion}; use image::ExtendedColorType; use 
image::{codecs::bmp::BmpEncoder, codecs::jpeg::JpegEncoder, ColorType}; use std::fs::File; use std::io::{BufWriter, Seek, SeekFrom, Write}; trait Encoder { fn encode_raw(&self, into: &mut Vec, im: &[u8], dims: u32, color: ExtendedColorType); fn encode_bufvec(&self, into: &mut Vec, im: &[u8], dims: u32, color: ExtendedColorType); fn encode_file(&self, file: &File, im: &[u8], dims: u32, color: ExtendedColorType); } #[derive(Clone, Copy)] struct BenchDef { with: &'static dyn Encoder, name: &'static str, sizes: &'static [u32], colors: &'static [ColorType], } fn encode_all(c: &mut Criterion) { const BENCH_DEFS: &[BenchDef] = &[ BenchDef { with: &Bmp, name: "bmp", sizes: &[100u32, 200, 400], colors: &[ColorType::L8, ColorType::Rgb8, ColorType::Rgba8], }, BenchDef { with: &Jpeg, name: "jpeg", sizes: &[64u32, 128, 256], colors: &[ColorType::L8, ColorType::Rgb8, ColorType::Rgba8], }, ]; for definition in BENCH_DEFS { encode_definition(c, definition); } } criterion_group!(benches, encode_all); criterion_main!(benches); type BenchGroup<'a> = criterion::BenchmarkGroup<'a, criterion::measurement::WallTime>; /// Benchmarks encoding a zeroed image. /// /// For compressed formats this is surely not representative of encoding a normal image but it's a /// start for benchmarking. fn encode_zeroed(group: &mut BenchGroup, with: &dyn Encoder, size: u32, color: ExtendedColorType) { let im = vec![0; (color.bits_per_pixel() as usize * size as usize + 7) / 8 * size as usize]; group.bench_with_input( BenchmarkId::new(format!("zero-{color:?}-rawvec"), size), &im, |b, image| { let mut v = vec![]; with.encode_raw(&mut v, &im, size, color); b.iter(|| with.encode_raw(&mut v, image, size, color)); }, ); group.bench_with_input( BenchmarkId::new(format!("zero-{color:?}-bufvec"), size), &im, |b, image| { let mut v = vec![]; with.encode_raw(&mut v, &im, size, color); b.iter(|| with.encode_bufvec(&mut v, image, size, color)); }, ); group.bench_with_input( BenchmarkId::new(format!("zero-{color:?}-file"), size), &im, |b, image| { let file = File::create("temp.bmp").unwrap(); b.iter(|| with.encode_file(&file, image, size, color)); }, ); } fn encode_definition(criterion: &mut Criterion, def: &BenchDef) { let mut group = criterion.benchmark_group(format!("encode-{}", def.name)); for &color in def.colors { for &size in def.sizes { encode_zeroed(&mut group, def.with, size, color.into()); } } } struct Bmp; struct Jpeg; trait EncoderBase { fn encode(&self, into: impl Write, im: &[u8], dims: u32, color: ExtendedColorType); } impl Encoder for T { fn encode_raw(&self, into: &mut Vec, im: &[u8], dims: u32, color: ExtendedColorType) { into.clear(); self.encode(into, im, dims, color); } fn encode_bufvec(&self, into: &mut Vec, im: &[u8], dims: u32, color: ExtendedColorType) { into.clear(); let buf = BufWriter::new(into); self.encode(buf, im, dims, color); } fn encode_file(&self, mut file: &File, im: &[u8], dims: u32, color: ExtendedColorType) { file.seek(SeekFrom::Start(0)).unwrap(); let buf = BufWriter::new(file); self.encode(buf, im, dims, color); } } impl EncoderBase for Bmp { fn encode(&self, mut into: impl Write, im: &[u8], size: u32, color: ExtendedColorType) { let mut x = BmpEncoder::new(&mut into); x.encode(im, size, size, color).unwrap(); } } impl EncoderBase for Jpeg { fn encode(&self, mut into: impl Write, im: &[u8], size: u32, color: ExtendedColorType) { let mut x = JpegEncoder::new(&mut into); x.encode(im, size, size, color).unwrap(); } } image-0.25.5/benches/fast_blur.rs000064400000000000000000000006101046102023000147320ustar 
00000000000000use criterion::{criterion_group, criterion_main, Criterion}; use image::{imageops::fast_blur, ImageBuffer, Rgb}; pub fn bench_fast_blur(c: &mut Criterion) { let src = ImageBuffer::from_pixel(1024, 768, Rgb([255u8, 0, 0])); c.bench_function("fast_blur", |b| { b.iter(|| fast_blur(&src, 50.0)); }); } criterion_group!(benches, bench_fast_blur); criterion_main!(benches); image-0.25.5/src/animation.rs000064400000000000000000000316011046102023000141140ustar 00000000000000use std::cmp::Ordering; use std::time::Duration; use crate::error::ImageResult; use crate::RgbaImage; /// An implementation dependent iterator, reading the frames as requested pub struct Frames<'a> { iterator: Box> + 'a>, } impl<'a> Frames<'a> { /// Creates a new `Frames` from an implementation specific iterator. #[must_use] pub fn new(iterator: Box> + 'a>) -> Self { Frames { iterator } } /// Steps through the iterator from the current frame until the end and pushes each frame into /// a `Vec`. /// If en error is encountered that error is returned instead. /// /// Note: This is equivalent to `Frames::collect::>>()` pub fn collect_frames(self) -> ImageResult> { self.collect() } } impl Iterator for Frames<'_> { type Item = ImageResult; fn next(&mut self) -> Option> { self.iterator.next() } } /// A single animation frame pub struct Frame { /// Delay between the frames in milliseconds delay: Delay, /// x offset left: u32, /// y offset top: u32, buffer: RgbaImage, } impl Clone for Frame { fn clone(&self) -> Self { Self { delay: self.delay, left: self.left, top: self.top, buffer: self.buffer.clone(), } } fn clone_from(&mut self, source: &Self) { self.delay = source.delay; self.left = source.left; self.top = source.top; self.buffer.clone_from(&source.buffer); } } /// The delay of a frame relative to the previous one. #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd)] pub struct Delay { ratio: Ratio, } impl Frame { /// Constructs a new frame without any delay. #[must_use] pub fn new(buffer: RgbaImage) -> Frame { Frame { delay: Delay::from_ratio(Ratio { numer: 0, denom: 1 }), left: 0, top: 0, buffer, } } /// Constructs a new frame #[must_use] pub fn from_parts(buffer: RgbaImage, left: u32, top: u32, delay: Delay) -> Frame { Frame { delay, left, top, buffer, } } /// Delay of this frame #[must_use] pub fn delay(&self) -> Delay { self.delay } /// Returns the image buffer #[must_use] pub fn buffer(&self) -> &RgbaImage { &self.buffer } /// Returns a mutable image buffer pub fn buffer_mut(&mut self) -> &mut RgbaImage { &mut self.buffer } /// Returns the image buffer #[must_use] pub fn into_buffer(self) -> RgbaImage { self.buffer } /// Returns the x offset #[must_use] pub fn left(&self) -> u32 { self.left } /// Returns the y offset #[must_use] pub fn top(&self) -> u32 { self.top } } impl Delay { /// Create a delay from a ratio of milliseconds. /// /// # Examples /// /// ``` /// use image::Delay; /// let delay_10ms = Delay::from_numer_denom_ms(10, 1); /// ``` #[must_use] pub fn from_numer_denom_ms(numerator: u32, denominator: u32) -> Self { Delay { ratio: Ratio::new(numerator, denominator), } } /// Convert from a duration, clamped between 0 and an implemented defined maximum. /// /// The maximum is *at least* `i32::MAX` milliseconds. It should be noted that the accuracy of /// the result may be relative and very large delays have a coarse resolution. 
/// /// # Examples /// /// ``` /// use std::time::Duration; /// use image::Delay; /// /// let duration = Duration::from_millis(20); /// let delay = Delay::from_saturating_duration(duration); /// ``` #[must_use] pub fn from_saturating_duration(duration: Duration) -> Self { // A few notes: The largest number we can represent as a ratio is u32::MAX but we can // sometimes represent much smaller numbers. // // We can represent duration as `millis+a/b` (where a < b, b > 0). // We must thus bound b with `b·millis + (b-1) <= u32::MAX` or // > `0 < b <= (u32::MAX + 1)/(millis + 1)` // Corollary: millis <= u32::MAX const MILLIS_BOUND: u128 = u32::MAX as u128; let millis = duration.as_millis().min(MILLIS_BOUND); let submillis = (duration.as_nanos() % 1_000_000) as u32; let max_b = if millis > 0 { ((MILLIS_BOUND + 1) / (millis + 1)) as u32 } else { MILLIS_BOUND as u32 }; let millis = millis as u32; let (a, b) = Self::closest_bounded_fraction(max_b, submillis, 1_000_000); Self::from_numer_denom_ms(a + b * millis, b) } /// The numerator and denominator of the delay in milliseconds. /// /// This is guaranteed to be an exact conversion if the `Delay` was previously created with the /// `from_numer_denom_ms` constructor. #[must_use] pub fn numer_denom_ms(self) -> (u32, u32) { (self.ratio.numer, self.ratio.denom) } pub(crate) fn from_ratio(ratio: Ratio) -> Self { Delay { ratio } } pub(crate) fn into_ratio(self) -> Ratio { self.ratio } /// Given some fraction, compute an approximation with denominator bounded. /// /// Note that `denom_bound` bounds nominator and denominator of all intermediate /// approximations and the end result. fn closest_bounded_fraction(denom_bound: u32, nom: u32, denom: u32) -> (u32, u32) { use std::cmp::Ordering::*; assert!(0 < denom); assert!(0 < denom_bound); assert!(nom < denom); // Avoid a few type troubles. All intermediate results are bounded by `denom_bound` which // is in turn bounded by u32::MAX. Representing with u64 allows multiplication of any two // values without fears of overflow. // Compare two fractions whose parts fit into a u32. fn compare_fraction((an, ad): (u64, u64), (bn, bd): (u64, u64)) -> Ordering { (an * bd).cmp(&(bn * ad)) } // Computes the nominator of the absolute difference between two such fractions. fn abs_diff_nom((an, ad): (u64, u64), (bn, bd): (u64, u64)) -> u64 { let c0 = an * bd; let c1 = ad * bn; let d0 = c0.max(c1); let d1 = c0.min(c1); d0 - d1 } let exact = (u64::from(nom), u64::from(denom)); // The lower bound fraction, numerator and denominator. let mut lower = (0u64, 1u64); // The upper bound fraction, numerator and denominator. let mut upper = (1u64, 1u64); // The closest approximation for now. let mut guess = (u64::from(nom * 2 > denom), 1u64); // loop invariant: ad, bd <= denom_bound // iterates the Farey sequence. loop { // Break if we are done. if compare_fraction(guess, exact) == Equal { break; } // Break if next Farey number is out-of-range. if u64::from(denom_bound) - lower.1 < upper.1 { break; } // Next Farey approximation n between a and b let next = (lower.0 + upper.0, lower.1 + upper.1); // if F < n then replace the upper bound, else replace lower. if compare_fraction(exact, next) == Less { upper = next; } else { lower = next; } // Now correct the closest guess. // In other words, if |c - f| > |n - f| then replace it with the new guess. // This favors the guess with smaller denominator on equality. 
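// A small worked example of this loop: for nom/denom = 1/3 with denom_bound = 4 the guess starts at (0, 1); the first mediant, 1/2, becomes both the new upper bound and the new guess, and the next mediant (of 0/1 and 1/2) is 1/3 itself, so the guess ends at (1, 3) and the loop exits on exact equality.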
// |g - f| = |g_diff_nom|/(gd*fd); let g_diff_nom = abs_diff_nom(guess, exact); // |n - f| = |n_diff_nom|/(nd*fd); let n_diff_nom = abs_diff_nom(next, exact); // The difference |n - f| is smaller than |g - f| if either the integral part of the // fraction |n_diff_nom|/nd is smaller than the one of |g_diff_nom|/gd or if they are // the same but the fractional part is larger. if match (n_diff_nom / next.1).cmp(&(g_diff_nom / guess.1)) { Less => true, Greater => false, // Note that the nominator for the fractional part is smaller than its denominator // which is smaller than u32 and can't overflow the multiplication with the other // denominator, that is we can compare these fractions by multiplication with the // respective other denominator. Equal => { compare_fraction( (n_diff_nom % next.1, next.1), (g_diff_nom % guess.1, guess.1), ) == Less } } { guess = next; } } (guess.0 as u32, guess.1 as u32) } } impl From for Duration { fn from(delay: Delay) -> Self { let ratio = delay.into_ratio(); let ms = ratio.to_integer(); let rest = ratio.numer % ratio.denom; let nanos = (u64::from(rest) * 1_000_000) / u64::from(ratio.denom); Duration::from_millis(ms.into()) + Duration::from_nanos(nanos) } } #[derive(Copy, Clone, Debug)] pub(crate) struct Ratio { numer: u32, denom: u32, } impl Ratio { #[inline] pub(crate) fn new(numerator: u32, denominator: u32) -> Self { assert_ne!(denominator, 0); Self { numer: numerator, denom: denominator, } } #[inline] pub(crate) fn to_integer(self) -> u32 { self.numer / self.denom } } impl PartialEq for Ratio { fn eq(&self, other: &Self) -> bool { self.cmp(other) == Ordering::Equal } } impl Eq for Ratio {} impl PartialOrd for Ratio { fn partial_cmp(&self, other: &Self) -> Option { Some(self.cmp(other)) } } impl Ord for Ratio { fn cmp(&self, other: &Self) -> Ordering { // The following comparison can be simplified: // a / b c / d // We multiply both sides by `b`: // a c * b / d // We multiply both sides by `d`: // a * d c * b let a: u32 = self.numer; let b: u32 = self.denom; let c: u32 = other.numer; let d: u32 = other.denom; // We cast the types from `u32` to `u64` in order // to not overflow the multiplications. (a as u64 * d as u64).cmp(&(c as u64 * b as u64)) } } #[cfg(test)] mod tests { use super::{Delay, Duration, Ratio}; #[test] fn simple() { let second = Delay::from_numer_denom_ms(1000, 1); assert_eq!(Duration::from(second), Duration::from_secs(1)); } #[test] fn fps_30() { let thirtieth = Delay::from_numer_denom_ms(1000, 30); let duration = Duration::from(thirtieth); assert_eq!(duration.as_secs(), 0); assert_eq!(duration.subsec_millis(), 33); assert_eq!(duration.subsec_nanos(), 33_333_333); } #[test] fn duration_outlier() { let oob = Duration::from_secs(0xFFFF_FFFF); let delay = Delay::from_saturating_duration(oob); assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1)); } #[test] fn duration_approx() { let oob = Duration::from_millis(0xFFFF_FFFF) + Duration::from_micros(1); let delay = Delay::from_saturating_duration(oob); assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1)); let inbounds = Duration::from_millis(0xFFFF_FFFF) - Duration::from_micros(1); let delay = Delay::from_saturating_duration(inbounds); assert_eq!(delay.numer_denom_ms(), (0xFFFF_FFFF, 1)); let fine = Duration::from_millis(0xFFFF_FFFF / 1000) + Duration::from_micros(0xFFFF_FFFF % 1000); let delay = Delay::from_saturating_duration(fine); // Funnily, 0xFFFF_FFFF is divisble by 5, thus we compare with a `Ratio`. 
assert_eq!(delay.into_ratio(), Ratio::new(0xFFFF_FFFF, 1000)); } #[test] fn precise() { // The ratio has only 32 bits in the numerator, too imprecise to get more than 11 digits // correct. But it may be expressed as 1_000_000/3 instead. let exceed = Duration::from_secs(333) + Duration::from_nanos(333_333_333); let delay = Delay::from_saturating_duration(exceed); assert_eq!(Duration::from(delay), exceed); } #[test] fn small() { // Not quite a delay of `1 ms`. let delay = Delay::from_numer_denom_ms(1 << 16, (1 << 16) + 1); let duration = Duration::from(delay); assert_eq!(duration.as_millis(), 0); // Not precisely the original but should be smaller than 0. let delay = Delay::from_saturating_duration(duration); assert_eq!(delay.into_ratio().to_integer(), 0); } } image-0.25.5/src/buffer.rs000064400000000000000000001531121046102023000134100ustar 00000000000000//! Contains the generic `ImageBuffer` struct. use num_traits::Zero; use std::fmt; use std::marker::PhantomData; use std::ops::{Deref, DerefMut, Index, IndexMut, Range}; use std::path::Path; use std::slice::{ChunksExact, ChunksExactMut}; use crate::color::{FromColor, Luma, LumaA, Rgb, Rgba}; use crate::dynimage::{save_buffer, save_buffer_with_format, write_buffer_with_format}; use crate::error::ImageResult; use crate::flat::{FlatSamples, SampleLayout}; use crate::image::{GenericImage, GenericImageView, ImageEncoder, ImageFormat}; use crate::math::Rect; use crate::traits::{EncodableLayout, Pixel, PixelWithColorType}; use crate::utils::expand_packed; use crate::DynamicImage; /// Iterate over pixel refs. pub struct Pixels<'a, P: Pixel + 'a> where P::Subpixel: 'a, { chunks: ChunksExact<'a, P::Subpixel>, } impl<'a, P: Pixel + 'a> Iterator for Pixels<'a, P> where P::Subpixel: 'a, { type Item = &'a P; #[inline(always)] fn next(&mut self) -> Option<&'a P> { self.chunks.next().map(|v|
<P as Pixel>
::from_slice(v)) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for Pixels<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.chunks.len() } } impl<'a, P: Pixel + 'a> DoubleEndedIterator for Pixels<'a, P> where P::Subpixel: 'a, { #[inline(always)] fn next_back(&mut self) -> Option<&'a P> { self.chunks.next_back().map(|v|
<P as Pixel>
::from_slice(v)) } } impl Clone for Pixels<'_, P> { fn clone(&self) -> Self { Pixels { chunks: self.chunks.clone(), } } } impl fmt::Debug for Pixels<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("Pixels") .field("chunks", &self.chunks) .finish() } } /// Iterate over mutable pixel refs. pub struct PixelsMut<'a, P: Pixel + 'a> where P::Subpixel: 'a, { chunks: ChunksExactMut<'a, P::Subpixel>, } impl<'a, P: Pixel + 'a> Iterator for PixelsMut<'a, P> where P::Subpixel: 'a, { type Item = &'a mut P; #[inline(always)] fn next(&mut self) -> Option<&'a mut P> { self.chunks.next().map(|v|
<P as Pixel>
::from_slice_mut(v)) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for PixelsMut<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.chunks.len() } } impl<'a, P: Pixel + 'a> DoubleEndedIterator for PixelsMut<'a, P> where P::Subpixel: 'a, { #[inline(always)] fn next_back(&mut self) -> Option<&'a mut P> { self.chunks .next_back() .map(|v|
<P as Pixel>
::from_slice_mut(v)) } } impl fmt::Debug for PixelsMut<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("PixelsMut") .field("chunks", &self.chunks) .finish() } } /// Iterate over rows of an image /// /// This iterator is created with [`ImageBuffer::rows`]. See its document for details. /// /// [`ImageBuffer::rows`]: ../struct.ImageBuffer.html#method.rows pub struct Rows<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { pixels: ChunksExact<'a, P::Subpixel>, } impl<'a, P: Pixel + 'a> Rows<'a, P> { /// Construct the iterator from image pixels. This is not public since it has a (hidden) panic /// condition. The `pixels` slice must be large enough so that all pixels are addressable. fn with_image(pixels: &'a [P::Subpixel], width: u32, height: u32) -> Self { let row_len = (width as usize) * usize::from(
<P as Pixel>
::CHANNEL_COUNT); if row_len == 0 { Rows { pixels: [].chunks_exact(1), } } else { let pixels = pixels .get(..row_len * height as usize) .expect("Pixel buffer has too few subpixels"); // Rows are physically present. In particular, height is smaller than `usize::MAX` as // all subpixels can be indexed. Rows { pixels: pixels.chunks_exact(row_len), } } } } impl<'a, P: Pixel + 'a> Iterator for Rows<'a, P> where P::Subpixel: 'a, { type Item = Pixels<'a, P>; #[inline(always)] fn next(&mut self) -> Option> { let row = self.pixels.next()?; Some(Pixels { // Note: this is not reached when CHANNEL_COUNT is 0. chunks: row.chunks_exact(
<P as Pixel>
::CHANNEL_COUNT as usize), }) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for Rows<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.pixels.len() } } impl<'a, P: Pixel + 'a> DoubleEndedIterator for Rows<'a, P> where P::Subpixel: 'a, { #[inline(always)] fn next_back(&mut self) -> Option> { let row = self.pixels.next_back()?; Some(Pixels { // Note: this is not reached when CHANNEL_COUNT is 0. chunks: row.chunks_exact(
<P as Pixel>
::CHANNEL_COUNT as usize), }) } } impl Clone for Rows<'_, P> { fn clone(&self) -> Self { Rows { pixels: self.pixels.clone(), } } } impl fmt::Debug for Rows<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("Rows") .field("pixels", &self.pixels) .finish() } } /// Iterate over mutable rows of an image /// /// This iterator is created with [`ImageBuffer::rows_mut`]. See its document for details. /// /// [`ImageBuffer::rows_mut`]: ../struct.ImageBuffer.html#method.rows_mut pub struct RowsMut<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { pixels: ChunksExactMut<'a, P::Subpixel>, } impl<'a, P: Pixel + 'a> RowsMut<'a, P> { /// Construct the iterator from image pixels. This is not public since it has a (hidden) panic /// condition. The `pixels` slice must be large enough so that all pixels are addressable. fn with_image(pixels: &'a mut [P::Subpixel], width: u32, height: u32) -> Self { let row_len = (width as usize) * usize::from(
<P as Pixel>
::CHANNEL_COUNT); if row_len == 0 { RowsMut { pixels: [].chunks_exact_mut(1), } } else { let pixels = pixels .get_mut(..row_len * height as usize) .expect("Pixel buffer has too few subpixels"); // Rows are physically present. In particular, height is smaller than `usize::MAX` as // all subpixels can be indexed. RowsMut { pixels: pixels.chunks_exact_mut(row_len), } } } } impl<'a, P: Pixel + 'a> Iterator for RowsMut<'a, P> where P::Subpixel: 'a, { type Item = PixelsMut<'a, P>; #[inline(always)] fn next(&mut self) -> Option> { let row = self.pixels.next()?; Some(PixelsMut { // Note: this is not reached when CHANNEL_COUNT is 0. chunks: row.chunks_exact_mut(
<P as Pixel>
::CHANNEL_COUNT as usize), }) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for RowsMut<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.pixels.len() } } impl<'a, P: Pixel + 'a> DoubleEndedIterator for RowsMut<'a, P> where P::Subpixel: 'a, { #[inline(always)] fn next_back(&mut self) -> Option> { let row = self.pixels.next_back()?; Some(PixelsMut { // Note: this is not reached when CHANNEL_COUNT is 0. chunks: row.chunks_exact_mut(
<P as Pixel>
::CHANNEL_COUNT as usize), }) } } impl fmt::Debug for RowsMut<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("RowsMut") .field("pixels", &self.pixels) .finish() } } /// Enumerate the pixels of an image. pub struct EnumeratePixels<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { pixels: Pixels<'a, P>, x: u32, y: u32, width: u32, } impl<'a, P: Pixel + 'a> Iterator for EnumeratePixels<'a, P> where P::Subpixel: 'a, { type Item = (u32, u32, &'a P); #[inline(always)] fn next(&mut self) -> Option<(u32, u32, &'a P)> { if self.x >= self.width { self.x = 0; self.y += 1; } let (x, y) = (self.x, self.y); self.x += 1; self.pixels.next().map(|p| (x, y, p)) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for EnumeratePixels<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.pixels.len() } } impl Clone for EnumeratePixels<'_, P> { fn clone(&self) -> Self { EnumeratePixels { pixels: self.pixels.clone(), ..*self } } } impl fmt::Debug for EnumeratePixels<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumeratePixels") .field("pixels", &self.pixels) .field("x", &self.x) .field("y", &self.y) .field("width", &self.width) .finish() } } /// Enumerate the rows of an image. pub struct EnumerateRows<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { rows: Rows<'a, P>, y: u32, width: u32, } impl<'a, P: Pixel + 'a> Iterator for EnumerateRows<'a, P> where P::Subpixel: 'a, { type Item = (u32, EnumeratePixels<'a, P>); #[inline(always)] fn next(&mut self) -> Option<(u32, EnumeratePixels<'a, P>)> { let y = self.y; self.y += 1; self.rows.next().map(|r| { ( y, EnumeratePixels { x: 0, y, width: self.width, pixels: r, }, ) }) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for EnumerateRows<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.rows.len() } } impl Clone for EnumerateRows<'_, P> { fn clone(&self) -> Self { EnumerateRows { rows: self.rows.clone(), ..*self } } } impl fmt::Debug for EnumerateRows<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumerateRows") .field("rows", &self.rows) .field("y", &self.y) .field("width", &self.width) .finish() } } /// Enumerate the pixels of an image. pub struct EnumeratePixelsMut<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { pixels: PixelsMut<'a, P>, x: u32, y: u32, width: u32, } impl<'a, P: Pixel + 'a> Iterator for EnumeratePixelsMut<'a, P> where P::Subpixel: 'a, { type Item = (u32, u32, &'a mut P); #[inline(always)] fn next(&mut self) -> Option<(u32, u32, &'a mut P)> { if self.x >= self.width { self.x = 0; self.y += 1; } let (x, y) = (self.x, self.y); self.x += 1; self.pixels.next().map(|p| (x, y, p)) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for EnumeratePixelsMut<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.pixels.len() } } impl fmt::Debug for EnumeratePixelsMut<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumeratePixelsMut") .field("pixels", &self.pixels) .field("x", &self.x) .field("y", &self.y) .field("width", &self.width) .finish() } } /// Enumerate the rows of an image. pub struct EnumerateRowsMut<'a, P: Pixel + 'a> where
<P as Pixel>
::Subpixel: 'a, { rows: RowsMut<'a, P>, y: u32, width: u32, } impl<'a, P: Pixel + 'a> Iterator for EnumerateRowsMut<'a, P> where P::Subpixel: 'a, { type Item = (u32, EnumeratePixelsMut<'a, P>); #[inline(always)] fn next(&mut self) -> Option<(u32, EnumeratePixelsMut<'a, P>)> { let y = self.y; self.y += 1; self.rows.next().map(|r| { ( y, EnumeratePixelsMut { x: 0, y, width: self.width, pixels: r, }, ) }) } #[inline(always)] fn size_hint(&self) -> (usize, Option) { let len = self.len(); (len, Some(len)) } } impl<'a, P: Pixel + 'a> ExactSizeIterator for EnumerateRowsMut<'a, P> where P::Subpixel: 'a, { fn len(&self) -> usize { self.rows.len() } } impl fmt::Debug for EnumerateRowsMut<'_, P> where P::Subpixel: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumerateRowsMut") .field("rows", &self.rows) .field("y", &self.y) .field("width", &self.width) .finish() } } /// Generic image buffer /// /// This is an image parameterised by its Pixel types, represented by a width and height and a /// container of channel data. It provides direct access to its pixels and implements the /// [`GenericImageView`] and [`GenericImage`] traits. In many ways, this is the standard buffer /// implementing those traits. Using this concrete type instead of a generic type parameter has /// been shown to improve performance. /// /// The crate defines a few type aliases with regularly used pixel types for your convenience, such /// as [`RgbImage`], [`GrayImage`] etc. /// /// [`GenericImage`]: trait.GenericImage.html /// [`GenericImageView`]: trait.GenericImageView.html /// [`RgbImage`]: type.RgbImage.html /// [`GrayImage`]: type.GrayImage.html /// /// To convert between images of different Pixel types use [`DynamicImage`]. /// /// You can retrieve a complete description of the buffer's layout and contents through /// [`as_flat_samples`] and [`as_flat_samples_mut`]. This can be handy to also use the contents in /// a foreign language, map it as a GPU host buffer or other similar tasks. /// /// [`DynamicImage`]: enum.DynamicImage.html /// [`as_flat_samples`]: #method.as_flat_samples /// [`as_flat_samples_mut`]: #method.as_flat_samples_mut /// /// ## Examples /// /// Create a simple canvas and paint a small cross. /// /// ``` /// use image::{RgbImage, Rgb}; /// /// let mut img = RgbImage::new(32, 32); /// /// for x in 15..=17 { /// for y in 8..24 { /// img.put_pixel(x, y, Rgb([255, 0, 0])); /// img.put_pixel(y, x, Rgb([255, 0, 0])); /// } /// } /// ``` /// /// Overlays an image on top of a larger background raster. /// /// ```no_run /// use image::{GenericImage, GenericImageView, ImageBuffer, open}; /// /// let on_top = open("path/to/some.png").unwrap().into_rgb8(); /// let mut img = ImageBuffer::from_fn(512, 512, |x, y| { /// if (x + y) % 2 == 0 { /// image::Rgb([0, 0, 0]) /// } else { /// image::Rgb([255, 255, 255]) /// } /// }); /// /// image::imageops::overlay(&mut img, &on_top, 128, 128); /// ``` /// /// Convert an `RgbaImage` to a `GrayImage`. /// /// ```no_run /// use image::{open, DynamicImage}; /// /// let rgba = open("path/to/some.png").unwrap().into_rgba8(); /// let gray = DynamicImage::ImageRgba8(rgba).into_luma8(); /// ``` #[derive(Debug, Hash, PartialEq, Eq)] pub struct ImageBuffer { width: u32, height: u32, _phantom: PhantomData
<P>
, data: Container, } // generic implementation, shared along all image buffers impl ImageBuffer where P: Pixel, Container: Deref, { /// Constructs a buffer from a generic container /// (for example a `Vec` or a slice) /// /// Returns `None` if the container is not big enough (including when the image dimensions /// necessitate an allocation of more bytes than supported by the container). pub fn from_raw(width: u32, height: u32, buf: Container) -> Option> { if Self::check_image_fits(width, height, buf.len()) { Some(ImageBuffer { data: buf, width, height, _phantom: PhantomData, }) } else { None } } /// Returns the underlying raw buffer pub fn into_raw(self) -> Container { self.data } /// Returns the underlying raw buffer pub fn as_raw(&self) -> &Container { &self.data } /// The width and height of this image. pub fn dimensions(&self) -> (u32, u32) { (self.width, self.height) } /// The width of this image. pub fn width(&self) -> u32 { self.width } /// The height of this image. pub fn height(&self) -> u32 { self.height } // TODO: choose name under which to expose. pub(crate) fn inner_pixels(&self) -> &[P::Subpixel] { let len = Self::image_buffer_len(self.width, self.height).unwrap(); &self.data[..len] } /// Returns an iterator over the pixels of this image. /// The iteration order is x = 0 to width then y = 0 to height pub fn pixels(&self) -> Pixels
<P>
{ Pixels { chunks: self .inner_pixels() .chunks_exact(
<P as Pixel>
::CHANNEL_COUNT as usize), } } /// Returns an iterator over the rows of this image. /// /// Only non-empty rows can be iterated in this manner. In particular the iterator will not /// yield any item when the width of the image is `0` or a pixel type without any channels is /// used. This ensures that its length can always be represented by `usize`. pub fn rows(&self) -> Rows
<P>
{ Rows::with_image(&self.data, self.width, self.height) } /// Enumerates over the pixels of the image. /// The iterator yields the coordinates of each pixel /// along with a reference to them. /// The iteration order is x = 0 to width then y = 0 to height /// Starting from the top left. pub fn enumerate_pixels(&self) -> EnumeratePixels
<P>
{ EnumeratePixels { pixels: self.pixels(), x: 0, y: 0, width: self.width, } } /// Enumerates over the rows of the image. /// The iterator yields the y-coordinate of each row /// along with a reference to them. pub fn enumerate_rows(&self) -> EnumerateRows
<P>
{ EnumerateRows { rows: self.rows(), y: 0, width: self.width, } } /// Gets a reference to the pixel at location `(x, y)` /// /// # Panics /// /// Panics if `(x, y)` is out of the bounds `(width, height)`. #[inline] #[track_caller] pub fn get_pixel(&self, x: u32, y: u32) -> &P { match self.pixel_indices(x, y) { None => panic!( "Image index {:?} out of bounds {:?}", (x, y), (self.width, self.height) ), Some(pixel_indices) =>
<P as Pixel>
::from_slice(&self.data[pixel_indices]), } } /// Gets a reference to the pixel at location `(x, y)` or returns `None` if /// the index is out of the bounds `(width, height)`. pub fn get_pixel_checked(&self, x: u32, y: u32) -> Option<&P> { if x >= self.width { return None; } let num_channels =
<P as Pixel>
::CHANNEL_COUNT as usize; let i = (y as usize) .saturating_mul(self.width as usize) .saturating_add(x as usize) .saturating_mul(num_channels); self.data .get(i..i.checked_add(num_channels)?) .map(|pixel_indices|
<P as Pixel>
::from_slice(pixel_indices)) } /// Test that the image fits inside the buffer. /// /// Verifies that the maximum image of pixels inside the bounds is smaller than the provided /// length. Note that as a corrolary we also have that the index calculation of pixels inside /// the bounds will not overflow. fn check_image_fits(width: u32, height: u32, len: usize) -> bool { let checked_len = Self::image_buffer_len(width, height); checked_len.map_or(false, |min_len| min_len <= len) } fn image_buffer_len(width: u32, height: u32) -> Option { Some(
<P as Pixel>
::CHANNEL_COUNT as usize) .and_then(|size| size.checked_mul(width as usize)) .and_then(|size| size.checked_mul(height as usize)) } #[inline(always)] fn pixel_indices(&self, x: u32, y: u32) -> Option> { if x >= self.width || y >= self.height { return None; } Some(self.pixel_indices_unchecked(x, y)) } #[inline(always)] fn pixel_indices_unchecked(&self, x: u32, y: u32) -> Range { let no_channels =
<P as Pixel>
::CHANNEL_COUNT as usize; // If in bounds, this can't overflow as we have tested that at construction! let min_index = (y as usize * self.width as usize + x as usize) * no_channels; min_index..min_index + no_channels } /// Get the format of the buffer when viewed as a matrix of samples. pub fn sample_layout(&self) -> SampleLayout { // None of these can overflow, as all our memory is addressable. SampleLayout::row_major_packed(
<P as Pixel>
::CHANNEL_COUNT, self.width, self.height) } /// Return the raw sample buffer with its stride an dimension information. /// /// The returned buffer is guaranteed to be well formed in all cases. It is laid out by /// colors, width then height, meaning `channel_stride <= width_stride <= height_stride`. All /// strides are in numbers of elements but those are mostly `u8` in which case the strides are /// also byte strides. pub fn into_flat_samples(self) -> FlatSamples where Container: AsRef<[P::Subpixel]>, { // None of these can overflow, as all our memory is addressable. let layout = self.sample_layout(); FlatSamples { samples: self.data, layout, color_hint: None, // TODO: the pixel type might contain P::COLOR_TYPE if it satisfies PixelWithColorType } } /// Return a view on the raw sample buffer. /// /// See [`into_flat_samples`](#method.into_flat_samples) for more details. pub fn as_flat_samples(&self) -> FlatSamples<&[P::Subpixel]> where Container: AsRef<[P::Subpixel]>, { let layout = self.sample_layout(); FlatSamples { samples: self.data.as_ref(), layout, color_hint: None, // TODO: the pixel type might contain P::COLOR_TYPE if it satisfies PixelWithColorType } } /// Return a mutable view on the raw sample buffer. /// /// See [`into_flat_samples`](#method.into_flat_samples) for more details. pub fn as_flat_samples_mut(&mut self) -> FlatSamples<&mut [P::Subpixel]> where Container: AsMut<[P::Subpixel]>, { let layout = self.sample_layout(); FlatSamples { samples: self.data.as_mut(), layout, color_hint: None, // TODO: the pixel type might contain P::COLOR_TYPE if it satisfies PixelWithColorType } } } impl ImageBuffer where P: Pixel, Container: Deref + DerefMut, { // TODO: choose name under which to expose. pub(crate) fn inner_pixels_mut(&mut self) -> &mut [P::Subpixel] { let len = Self::image_buffer_len(self.width, self.height).unwrap(); &mut self.data[..len] } /// Returns an iterator over the mutable pixels of this image. pub fn pixels_mut(&mut self) -> PixelsMut
<P>
{ PixelsMut { chunks: self .inner_pixels_mut() .chunks_exact_mut(
<P as Pixel>
::CHANNEL_COUNT as usize), } } /// Returns an iterator over the mutable rows of this image. /// /// Only non-empty rows can be iterated in this manner. In particular the iterator will not /// yield any item when the width of the image is `0` or a pixel type without any channels is /// used. This ensures that its length can always be represented by `usize`. pub fn rows_mut(&mut self) -> RowsMut
<P>
{ RowsMut::with_image(&mut self.data, self.width, self.height) } /// Enumerates over the pixels of the image. /// The iterator yields the coordinates of each pixel /// along with a mutable reference to them. pub fn enumerate_pixels_mut(&mut self) -> EnumeratePixelsMut
<P>
{ let width = self.width; EnumeratePixelsMut { pixels: self.pixels_mut(), x: 0, y: 0, width, } } /// Enumerates over the rows of the image. /// The iterator yields the y-coordinate of each row /// along with a mutable reference to them. pub fn enumerate_rows_mut(&mut self) -> EnumerateRowsMut
<P>
{ let width = self.width; EnumerateRowsMut { rows: self.rows_mut(), y: 0, width, } } /// Gets a reference to the mutable pixel at location `(x, y)` /// /// # Panics /// /// Panics if `(x, y)` is out of the bounds `(width, height)`. #[inline] #[track_caller] pub fn get_pixel_mut(&mut self, x: u32, y: u32) -> &mut P { match self.pixel_indices(x, y) { None => panic!( "Image index {:?} out of bounds {:?}", (x, y), (self.width, self.height) ), Some(pixel_indices) =>
<P as Pixel>
::from_slice_mut(&mut self.data[pixel_indices]), } } /// Gets a reference to the mutable pixel at location `(x, y)` or returns /// `None` if the index is out of the bounds `(width, height)`. pub fn get_pixel_mut_checked(&mut self, x: u32, y: u32) -> Option<&mut P> { if x >= self.width { return None; } let num_channels =
<P as Pixel>
::CHANNEL_COUNT as usize; let i = (y as usize) .saturating_mul(self.width as usize) .saturating_add(x as usize) .saturating_mul(num_channels); self.data .get_mut(i..i.checked_add(num_channels)?) .map(|pixel_indices|
<P as Pixel>
::from_slice_mut(pixel_indices)) } /// Puts a pixel at location `(x, y)` /// /// # Panics /// /// Panics if `(x, y)` is out of the bounds `(width, height)`. #[inline] #[track_caller] pub fn put_pixel(&mut self, x: u32, y: u32, pixel: P) { *self.get_pixel_mut(x, y) = pixel; } } impl ImageBuffer where P: Pixel, [P::Subpixel]: EncodableLayout, Container: Deref, { /// Saves the buffer to a file at the path specified. /// /// The image format is derived from the file extension. pub fn save(&self, path: Q) -> ImageResult<()> where Q: AsRef, P: PixelWithColorType, { save_buffer( path, self.inner_pixels().as_bytes(), self.width(), self.height(),
<P as PixelWithColorType>
::COLOR_TYPE, ) } } impl ImageBuffer where P: Pixel, [P::Subpixel]: EncodableLayout, Container: Deref, { /// Saves the buffer to a file at the specified path in /// the specified format. /// /// See [`save_buffer_with_format`](fn.save_buffer_with_format.html) for /// supported types. pub fn save_with_format(&self, path: Q, format: ImageFormat) -> ImageResult<()> where Q: AsRef, P: PixelWithColorType, { // This is valid as the subpixel is u8. save_buffer_with_format( path, self.inner_pixels().as_bytes(), self.width(), self.height(),
<P as PixelWithColorType>
::COLOR_TYPE, format, ) } } impl ImageBuffer where P: Pixel, [P::Subpixel]: EncodableLayout, Container: Deref, { /// Writes the buffer to a writer in the specified format. /// /// Assumes the writer is buffered. In most cases, you should wrap your writer in a `BufWriter` /// for best performance. pub fn write_to(&self, writer: &mut W, format: ImageFormat) -> ImageResult<()> where W: std::io::Write + std::io::Seek, P: PixelWithColorType, { // This is valid as the subpixel is u8. write_buffer_with_format( writer, self.inner_pixels().as_bytes(), self.width(), self.height(),
<P as PixelWithColorType>
::COLOR_TYPE, format, ) } } impl ImageBuffer where P: Pixel, [P::Subpixel]: EncodableLayout, Container: Deref, { /// Writes the buffer with the given encoder. pub fn write_with_encoder(&self, encoder: E) -> ImageResult<()> where E: ImageEncoder, P: PixelWithColorType, { // This is valid as the subpixel is u8. encoder.write_image( self.inner_pixels().as_bytes(), self.width(), self.height(),
<P as PixelWithColorType>
::COLOR_TYPE, ) } } impl Default for ImageBuffer where P: Pixel, Container: Default, { fn default() -> Self { Self { width: 0, height: 0, _phantom: PhantomData, data: Default::default(), } } } impl Deref for ImageBuffer where P: Pixel, Container: Deref, { type Target = [P::Subpixel]; fn deref(&self) -> &::Target { &self.data } } impl DerefMut for ImageBuffer where P: Pixel, Container: Deref + DerefMut, { fn deref_mut(&mut self) -> &mut ::Target { &mut self.data } } impl Index<(u32, u32)> for ImageBuffer where P: Pixel, Container: Deref, { type Output = P; fn index(&self, (x, y): (u32, u32)) -> &P { self.get_pixel(x, y) } } impl IndexMut<(u32, u32)> for ImageBuffer where P: Pixel, Container: Deref + DerefMut, { fn index_mut(&mut self, (x, y): (u32, u32)) -> &mut P { self.get_pixel_mut(x, y) } } impl Clone for ImageBuffer where P: Pixel, Container: Deref + Clone, { fn clone(&self) -> ImageBuffer { ImageBuffer { data: self.data.clone(), width: self.width, height: self.height, _phantom: PhantomData, } } fn clone_from(&mut self, source: &Self) { self.data.clone_from(&source.data); self.width = source.width; self.height = source.height; } } impl GenericImageView for ImageBuffer where P: Pixel, Container: Deref + Deref, { type Pixel = P; fn dimensions(&self) -> (u32, u32) { self.dimensions() } fn get_pixel(&self, x: u32, y: u32) -> P { *self.get_pixel(x, y) } /// Returns the pixel located at (x, y), ignoring bounds checking. #[inline(always)] unsafe fn unsafe_get_pixel(&self, x: u32, y: u32) -> P { let indices = self.pixel_indices_unchecked(x, y); *
<P as Pixel>
::from_slice(self.data.get_unchecked(indices)) } } impl GenericImage for ImageBuffer where P: Pixel, Container: Deref + DerefMut, { fn get_pixel_mut(&mut self, x: u32, y: u32) -> &mut P { self.get_pixel_mut(x, y) } fn put_pixel(&mut self, x: u32, y: u32, pixel: P) { *self.get_pixel_mut(x, y) = pixel; } /// Puts a pixel at location (x, y), ignoring bounds checking. #[inline(always)] unsafe fn unsafe_put_pixel(&mut self, x: u32, y: u32, pixel: P) { let indices = self.pixel_indices_unchecked(x, y); let p =
<P as Pixel>
::from_slice_mut(self.data.get_unchecked_mut(indices)); *p = pixel; } /// Put a pixel at location (x, y), taking into account alpha channels /// /// DEPRECATED: This method will be removed. Blend the pixel directly instead. fn blend_pixel(&mut self, x: u32, y: u32, p: P) { self.get_pixel_mut(x, y).blend(&p); } fn copy_within(&mut self, source: Rect, x: u32, y: u32) -> bool { let Rect { x: sx, y: sy, width, height, } = source; let dx = x; let dy = y; assert!(sx < self.width() && dx < self.width()); assert!(sy < self.height() && dy < self.height()); if self.width() - dx.max(sx) < width || self.height() - dy.max(sy) < height { return false; } if sy < dy { for y in (0..height).rev() { let sy = sy + y; let dy = dy + y; let Range { start, .. } = self.pixel_indices_unchecked(sx, sy); let Range { end, .. } = self.pixel_indices_unchecked(sx + width - 1, sy); let dst = self.pixel_indices_unchecked(dx, dy).start; self.data.copy_within(start..end, dst); } } else { for y in 0..height { let sy = sy + y; let dy = dy + y; let Range { start, .. } = self.pixel_indices_unchecked(sx, sy); let Range { end, .. } = self.pixel_indices_unchecked(sx + width - 1, sy); let dst = self.pixel_indices_unchecked(dx, dy).start; self.data.copy_within(start..end, dst); } } true } } // concrete implementation for `Vec`-backed buffers // TODO: I think that rustc does not "see" this impl any more: the impl with // Container meets the same requirements. At least, I got compile errors that // there is no such function as `into_vec`, whereas `into_raw` did work, and // `into_vec` is redundant anyway, because `into_raw` will give you the vector, // and it is more generic. impl ImageBuffer> { /// Creates a new image buffer based on a `Vec`. /// /// all the pixels of this image have a value of zero, regardless of the data type or number of channels. /// /// # Panics /// /// Panics when the resulting image is larger than the maximum size of a vector. #[must_use] pub fn new(width: u32, height: u32) -> ImageBuffer> { let size = Self::image_buffer_len(width, height) .expect("Buffer length in `ImageBuffer::new` overflows usize"); ImageBuffer { data: vec![Zero::zero(); size], width, height, _phantom: PhantomData, } } /// Constructs a new `ImageBuffer` by copying a pixel /// /// # Panics /// /// Panics when the resulting image is larger the the maximum size of a vector. pub fn from_pixel(width: u32, height: u32, pixel: P) -> ImageBuffer> { let mut buf = ImageBuffer::new(width, height); for p in buf.pixels_mut() { *p = pixel; } buf } /// Constructs a new `ImageBuffer` by repeated application of the supplied function. /// /// The arguments to the function are the pixel's x and y coordinates. /// /// # Panics /// /// Panics when the resulting image is larger the the maximum size of a vector. pub fn from_fn(width: u32, height: u32, mut f: F) -> ImageBuffer> where F: FnMut(u32, u32) -> P, { let mut buf = ImageBuffer::new(width, height); for (x, y, p) in buf.enumerate_pixels_mut() { *p = f(x, y); } buf } /// Creates an image buffer out of an existing buffer. /// Returns None if the buffer is not big enough. #[must_use] pub fn from_vec( width: u32, height: u32, buf: Vec, ) -> Option>> { ImageBuffer::from_raw(width, height, buf) } /// Consumes the image buffer and returns the underlying data /// as an owned buffer #[must_use] pub fn into_vec(self) -> Vec { self.into_raw() } } /// Provides color conversions for whole image buffers. 
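///
/// # Examples
///
/// A small usage sketch added for illustration; it relies only on the `RgbImage` and
/// `GrayImage` aliases defined later in this module and on the blanket `convert`
/// implementation below.
///
/// ```
/// use image::buffer::ConvertBuffer;
/// use image::{GrayImage, RgbImage};
///
/// // An all-black RGB canvas, converted pixel by pixel into a grayscale buffer.
/// let rgb = RgbImage::new(4, 4);
/// let gray: GrayImage = rgb.convert();
/// assert_eq!(gray.dimensions(), (4, 4));
/// ```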
pub trait ConvertBuffer { /// Converts `self` to a buffer of type T /// /// A generic implementation is provided to convert any image buffer to a image buffer /// based on a `Vec`. fn convert(&self) -> T; } // concrete implementation Luma -> Rgba impl GrayImage { /// Expands a color palette by re-using the existing buffer. /// Assumes 8 bit per pixel. Uses an optionally transparent index to /// adjust it's alpha value accordingly. #[must_use] pub fn expand_palette( self, palette: &[(u8, u8, u8)], transparent_idx: Option, ) -> RgbaImage { let (width, height) = self.dimensions(); let mut data = self.into_raw(); let entries = data.len(); data.resize(entries.checked_mul(4).unwrap(), 0); let mut buffer = ImageBuffer::from_vec(width, height, data).unwrap(); expand_packed(&mut buffer, 4, 8, |idx, pixel| { let (r, g, b) = palette[idx as usize]; let a = if let Some(t_idx) = transparent_idx { if t_idx == idx { 0 } else { 255 } } else { 255 }; pixel[0] = r; pixel[1] = g; pixel[2] = b; pixel[3] = a; }); buffer } } // TODO: Equality constraints are not yet supported in where clauses, when they // are, the T parameter should be removed in favor of ToType::Subpixel, which // will then be FromType::Subpixel. impl ConvertBuffer>> for ImageBuffer where Container: Deref, ToType: FromColor, { /// # Examples /// Convert RGB image to gray image. /// ```no_run /// use image::buffer::ConvertBuffer; /// use image::GrayImage; /// /// let image_path = "examples/fractal.png"; /// let image = image::open(&image_path) /// .expect("Open file failed") /// .to_rgba8(); /// /// let gray_image: GrayImage = image.convert(); /// ``` fn convert(&self) -> ImageBuffer> { let mut buffer: ImageBuffer> = ImageBuffer::new(self.width, self.height); for (to, from) in buffer.pixels_mut().zip(self.pixels()) { to.from_color(from); } buffer } } /// Sendable Rgb image buffer pub type RgbImage = ImageBuffer, Vec>; /// Sendable Rgb + alpha channel image buffer pub type RgbaImage = ImageBuffer, Vec>; /// Sendable grayscale image buffer pub type GrayImage = ImageBuffer, Vec>; /// Sendable grayscale + alpha channel image buffer pub type GrayAlphaImage = ImageBuffer, Vec>; /// Sendable 16-bit Rgb image buffer pub(crate) type Rgb16Image = ImageBuffer, Vec>; /// Sendable 16-bit Rgb + alpha channel image buffer pub(crate) type Rgba16Image = ImageBuffer, Vec>; /// Sendable 16-bit grayscale image buffer pub(crate) type Gray16Image = ImageBuffer, Vec>; /// Sendable 16-bit grayscale + alpha channel image buffer pub(crate) type GrayAlpha16Image = ImageBuffer, Vec>; /// An image buffer for 32-bit float RGB pixels, /// where the backing container is a flattened vector of floats. pub type Rgb32FImage = ImageBuffer, Vec>; /// An image buffer for 32-bit float RGBA pixels, /// where the backing container is a flattened vector of floats. 
pub type Rgba32FImage = ImageBuffer, Vec>; impl From for RgbImage { fn from(value: DynamicImage) -> Self { value.into_rgb8() } } impl From for RgbaImage { fn from(value: DynamicImage) -> Self { value.into_rgba8() } } impl From for GrayImage { fn from(value: DynamicImage) -> Self { value.into_luma8() } } impl From for GrayAlphaImage { fn from(value: DynamicImage) -> Self { value.into_luma_alpha8() } } impl From for Rgb16Image { fn from(value: DynamicImage) -> Self { value.into_rgb16() } } impl From for Rgba16Image { fn from(value: DynamicImage) -> Self { value.into_rgba16() } } impl From for Gray16Image { fn from(value: DynamicImage) -> Self { value.into_luma16() } } impl From for GrayAlpha16Image { fn from(value: DynamicImage) -> Self { value.into_luma_alpha16() } } impl From for Rgba32FImage { fn from(value: DynamicImage) -> Self { value.into_rgba32f() } } #[cfg(test)] mod test { use super::{GrayImage, ImageBuffer, RgbImage}; use crate::math::Rect; use crate::GenericImage as _; use crate::ImageFormat; use crate::{Luma, LumaA, Pixel, Rgb, Rgba}; use num_traits::Zero; #[test] /// Tests if image buffers from slices work fn slice_buffer() { let data = [0; 9]; let buf: ImageBuffer, _> = ImageBuffer::from_raw(3, 3, &data[..]).unwrap(); assert_eq!(&*buf, &data[..]) } macro_rules! new_buffer_zero_test { ($test_name:ident, $pxt:ty) => { #[test] fn $test_name() { let buffer = ImageBuffer::<$pxt, Vec<<$pxt as Pixel>::Subpixel>>::new(2, 2); assert!(buffer .iter() .all(|p| *p == <$pxt as Pixel>::Subpixel::zero())); } }; } new_buffer_zero_test!(luma_u8_zero_test, Luma); new_buffer_zero_test!(luma_u16_zero_test, Luma); new_buffer_zero_test!(luma_f32_zero_test, Luma); new_buffer_zero_test!(luma_a_u8_zero_test, LumaA); new_buffer_zero_test!(luma_a_u16_zero_test, LumaA); new_buffer_zero_test!(luma_a_f32_zero_test, LumaA); new_buffer_zero_test!(rgb_u8_zero_test, Rgb); new_buffer_zero_test!(rgb_u16_zero_test, Rgb); new_buffer_zero_test!(rgb_f32_zero_test, Rgb); new_buffer_zero_test!(rgb_a_u8_zero_test, Rgba); new_buffer_zero_test!(rgb_a_u16_zero_test, Rgba); new_buffer_zero_test!(rgb_a_f32_zero_test, Rgba); #[test] fn get_pixel() { let mut a: RgbImage = ImageBuffer::new(10, 10); { let b = a.get_mut(3 * 10).unwrap(); *b = 255; } assert_eq!(a.get_pixel(0, 1)[0], 255) } #[test] fn get_pixel_checked() { let mut a: RgbImage = ImageBuffer::new(10, 10); a.get_pixel_mut_checked(0, 1).unwrap()[0] = 255; assert_eq!(a.get_pixel_checked(0, 1), Some(&Rgb([255, 0, 0]))); assert_eq!(a.get_pixel_checked(0, 1).unwrap(), a.get_pixel(0, 1)); assert_eq!(a.get_pixel_checked(10, 0), None); assert_eq!(a.get_pixel_checked(0, 10), None); assert_eq!(a.get_pixel_mut_checked(10, 0), None); assert_eq!(a.get_pixel_mut_checked(0, 10), None); // From image/issues/1672 const WHITE: Rgb = Rgb([255_u8, 255, 255]); let mut a = RgbImage::new(2, 1); a.put_pixel(1, 0, WHITE); assert_eq!(a.get_pixel_checked(1, 0), Some(&WHITE)); assert_eq!(a.get_pixel_checked(1, 0).unwrap(), a.get_pixel(1, 0)); } #[test] fn mut_iter() { let mut a: RgbImage = ImageBuffer::new(10, 10); { let val = a.pixels_mut().next().unwrap(); *val = Rgb([42, 0, 0]); } assert_eq!(a.data[0], 42); } #[test] fn zero_width_zero_height() { let mut image = RgbImage::new(0, 0); assert_eq!(image.rows_mut().count(), 0); assert_eq!(image.pixels_mut().count(), 0); assert_eq!(image.rows().count(), 0); assert_eq!(image.pixels().count(), 0); } #[test] fn zero_width_nonzero_height() { let mut image = RgbImage::new(0, 2); assert_eq!(image.rows_mut().count(), 0); 
assert_eq!(image.pixels_mut().count(), 0); assert_eq!(image.rows().count(), 0); assert_eq!(image.pixels().count(), 0); } #[test] fn nonzero_width_zero_height() { let mut image = RgbImage::new(2, 0); assert_eq!(image.rows_mut().count(), 0); assert_eq!(image.pixels_mut().count(), 0); assert_eq!(image.rows().count(), 0); assert_eq!(image.pixels().count(), 0); } #[test] fn pixels_on_large_buffer() { let mut image = RgbImage::from_raw(1, 1, vec![0; 6]).unwrap(); assert_eq!(image.pixels().count(), 1); assert_eq!(image.enumerate_pixels().count(), 1); assert_eq!(image.pixels_mut().count(), 1); assert_eq!(image.enumerate_pixels_mut().count(), 1); assert_eq!(image.rows().count(), 1); assert_eq!(image.rows_mut().count(), 1); } #[test] fn default() { let image = ImageBuffer::, Vec>::default(); assert_eq!(image.dimensions(), (0, 0)); } #[test] #[rustfmt::skip] fn test_image_buffer_copy_within_oob() { let mut image: GrayImage = ImageBuffer::from_raw(4, 4, vec![0u8; 16]).unwrap(); assert!(!image.copy_within(Rect { x: 0, y: 0, width: 5, height: 4 }, 0, 0)); assert!(!image.copy_within(Rect { x: 0, y: 0, width: 4, height: 5 }, 0, 0)); assert!(!image.copy_within(Rect { x: 1, y: 0, width: 4, height: 4 }, 0, 0)); assert!(!image.copy_within(Rect { x: 0, y: 0, width: 4, height: 4 }, 1, 0)); assert!(!image.copy_within(Rect { x: 0, y: 1, width: 4, height: 4 }, 0, 0)); assert!(!image.copy_within(Rect { x: 0, y: 0, width: 4, height: 4 }, 0, 1)); assert!(!image.copy_within(Rect { x: 1, y: 1, width: 4, height: 4 }, 0, 0)); } #[test] fn test_image_buffer_copy_within_tl() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 1, 2, 3, 4, 0, 1, 2, 8, 4, 5, 6, 12, 8, 9, 10]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.copy_within( Rect { x: 0, y: 0, width: 3, height: 3 }, 1, 1 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn test_image_buffer_copy_within_tr() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 1, 2, 3, 1, 2, 3, 7, 5, 6, 7, 11, 9, 10, 11, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.copy_within( Rect { x: 1, y: 0, width: 3, height: 3 }, 0, 1 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn test_image_buffer_copy_within_bl() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 4, 5, 6, 4, 8, 9, 10, 8, 12, 13, 14, 12, 13, 14, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.copy_within( Rect { x: 0, y: 1, width: 3, height: 3 }, 1, 0 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn test_image_buffer_copy_within_br() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [5, 6, 7, 3, 9, 10, 11, 7, 13, 14, 15, 11, 12, 13, 14, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.copy_within( Rect { x: 1, y: 1, width: 3, height: 3 }, 0, 0 )); assert_eq!(&image.into_raw(), &expected); } #[test] #[cfg(feature = "png")] fn write_to_with_large_buffer() { // A buffer of 1 pixel, padded to 4 bytes as would be common in, e.g. BMP. 
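// `from_raw` accepts a container that is larger than strictly necessary; the encoder is
// expected to read only the leading `width * height * CHANNEL_COUNT` samples.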
let img: GrayImage = ImageBuffer::from_raw(1, 1, vec![0u8; 4]).unwrap(); let mut buffer = std::io::Cursor::new(vec![]); assert!(img.write_to(&mut buffer, ImageFormat::Png).is_ok()); } #[test] fn exact_size_iter_size_hint() { // The docs for `std::iter::ExactSizeIterator` requires that the implementation of // `size_hint` on the iterator returns the same value as the `len` implementation. // This test should work for any size image. const N: u32 = 10; let mut image = RgbImage::from_raw(N, N, vec![0; (N * N * 3) as usize]).unwrap(); let iter = image.pixels(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.pixels_mut(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.rows(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.rows_mut(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.enumerate_pixels(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.enumerate_rows(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.enumerate_pixels_mut(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); let iter = image.enumerate_rows_mut(); let exact_len = ExactSizeIterator::len(&iter); assert_eq!(iter.size_hint(), (exact_len, Some(exact_len))); } } #[cfg(test)] #[cfg(feature = "benchmarks")] mod benchmarks { use super::{ConvertBuffer, GrayImage, ImageBuffer, Pixel, RgbImage}; #[bench] fn conversion(b: &mut test::Bencher) { let mut a: RgbImage = ImageBuffer::new(1000, 1000); for p in a.pixels_mut() { let rgb = p.channels_mut(); rgb[0] = 255; rgb[1] = 23; rgb[2] = 42; } assert!(a.data[0] != 0); b.iter(|| { let b: GrayImage = a.convert(); assert!(0 != b.data[0]); assert!(a.data[0] != b.data[0]); test::black_box(b); }); b.bytes = 1000 * 1000 * 3 } #[bench] fn image_access_row_by_row(b: &mut test::Bencher) { let mut a: RgbImage = ImageBuffer::new(1000, 1000); for p in a.pixels_mut() { let rgb = p.channels_mut(); rgb[0] = 255; rgb[1] = 23; rgb[2] = 42; } b.iter(move || { let image: &RgbImage = test::black_box(&a); let mut sum: usize = 0; for y in 0..1000 { for x in 0..1000 { let pixel = image.get_pixel(x, y); sum = sum.wrapping_add(pixel[0] as usize); sum = sum.wrapping_add(pixel[1] as usize); sum = sum.wrapping_add(pixel[2] as usize); } } test::black_box(sum) }); b.bytes = 1000 * 1000 * 3; } #[bench] fn image_access_col_by_col(b: &mut test::Bencher) { let mut a: RgbImage = ImageBuffer::new(1000, 1000); for p in a.pixels_mut() { let rgb = p.channels_mut(); rgb[0] = 255; rgb[1] = 23; rgb[2] = 42; } b.iter(move || { let image: &RgbImage = test::black_box(&a); let mut sum: usize = 0; for x in 0..1000 { for y in 0..1000 { let pixel = image.get_pixel(x, y); sum = sum.wrapping_add(pixel[0] as usize); sum = sum.wrapping_add(pixel[1] as usize); sum = sum.wrapping_add(pixel[2] as usize); } } test::black_box(sum) }); b.bytes = 1000 * 1000 * 3; } } image-0.25.5/src/buffer_par.rs000064400000000000000000000313351046102023000142540ustar 00000000000000use rayon::iter::plumbing::*; use rayon::iter::{IndexedParallelIterator, ParallelIterator}; use rayon::slice::{ChunksExact, ChunksExactMut, ParallelSlice, ParallelSliceMut}; use 
std::fmt; use std::ops::{Deref, DerefMut}; use crate::traits::Pixel; use crate::ImageBuffer; /// Parallel iterator over pixel refs. #[derive(Clone)] pub struct PixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { chunks: ChunksExact<'a, P::Subpixel>, } impl<'a, P> ParallelIterator for PixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { type Item = &'a P; fn drive_unindexed(self, consumer: C) -> C::Result where C: UnindexedConsumer, { self.chunks .map(|v|
<P as Pixel>
::from_slice(v)) .drive_unindexed(consumer) } fn opt_len(&self) -> Option { Some(self.len()) } } impl<'a, P> IndexedParallelIterator for PixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { fn drive>(self, consumer: C) -> C::Result { self.chunks .map(|v|
<P as Pixel>
::from_slice(v)) .drive(consumer) } fn len(&self) -> usize { self.chunks.len() } fn with_producer>(self, callback: CB) -> CB::Output { self.chunks .map(|v|
<P as Pixel>
::from_slice(v)) .with_producer(callback) } } impl
<P>
fmt::Debug for PixelsPar<'_, P> where P: Pixel + Sync, P::Subpixel: Sync + fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("PixelsPar") .field("chunks", &self.chunks) .finish() } } /// Parallel iterator over mutable pixel refs. pub struct PixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { chunks: ChunksExactMut<'a, P::Subpixel>, } impl<'a, P> ParallelIterator for PixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { type Item = &'a mut P; fn drive_unindexed(self, consumer: C) -> C::Result where C: UnindexedConsumer, { self.chunks .map(|v|
<P as Pixel>
::from_slice_mut(v)) .drive_unindexed(consumer) } fn opt_len(&self) -> Option { Some(self.len()) } } impl<'a, P> IndexedParallelIterator for PixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { fn drive>(self, consumer: C) -> C::Result { self.chunks .map(|v|
<P as Pixel>
::from_slice_mut(v)) .drive(consumer) } fn len(&self) -> usize { self.chunks.len() } fn with_producer>(self, callback: CB) -> CB::Output { self.chunks .map(|v|
<P as Pixel>
::from_slice_mut(v)) .with_producer(callback) } } impl
<P>
fmt::Debug for PixelsMutPar<'_, P> where P: Pixel + Send + Sync, P::Subpixel: Send + Sync + fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("PixelsMutPar") .field("chunks", &self.chunks) .finish() } } /// Parallel iterator over pixel refs and their coordinates. #[derive(Clone)] pub struct EnumeratePixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { pixels: PixelsPar<'a, P>, width: u32, } impl<'a, P> ParallelIterator for EnumeratePixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { type Item = (u32, u32, &'a P); fn drive_unindexed(self, consumer: C) -> C::Result where C: UnindexedConsumer, { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .drive_unindexed(consumer) } fn opt_len(&self) -> Option { Some(self.len()) } } impl<'a, P> IndexedParallelIterator for EnumeratePixelsPar<'a, P> where P: Pixel + Sync + 'a, P::Subpixel: Sync + 'a, { fn drive>(self, consumer: C) -> C::Result { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .drive(consumer) } fn len(&self) -> usize { self.pixels.len() } fn with_producer>(self, callback: CB) -> CB::Output { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .with_producer(callback) } } impl
<P>
fmt::Debug for EnumeratePixelsPar<'_, P> where P: Pixel + Sync, P::Subpixel: Sync + fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumeratePixelsPar") .field("pixels", &self.pixels) .field("width", &self.width) .finish() } } /// Parallel iterator over mutable pixel refs and their coordinates. pub struct EnumeratePixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { pixels: PixelsMutPar<'a, P>, width: u32, } impl<'a, P> ParallelIterator for EnumeratePixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { type Item = (u32, u32, &'a mut P); fn drive_unindexed(self, consumer: C) -> C::Result where C: UnindexedConsumer, { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .drive_unindexed(consumer) } fn opt_len(&self) -> Option { Some(self.len()) } } impl<'a, P> IndexedParallelIterator for EnumeratePixelsMutPar<'a, P> where P: Pixel + Send + Sync + 'a, P::Subpixel: Send + Sync + 'a, { fn drive>(self, consumer: C) -> C::Result { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .drive(consumer) } fn len(&self) -> usize { self.pixels.len() } fn with_producer>(self, callback: CB) -> CB::Output { self.pixels .enumerate() .map(|(i, p)| { ( (i % self.width as usize) as u32, (i / self.width as usize) as u32, p, ) }) .with_producer(callback) } } impl
<P>
fmt::Debug for EnumeratePixelsMutPar<'_, P> where P: Pixel + Send + Sync, P::Subpixel: Send + Sync + fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("EnumeratePixelsMutPar") .field("pixels", &self.pixels) .field("width", &self.width) .finish() } } impl ImageBuffer where P: Pixel + Sync, P::Subpixel: Sync, Container: Deref, { /// Returns a parallel iterator over the pixels of this image, usable with `rayon`. /// See [`pixels`] for more information. /// /// [`pixels`]: #method.pixels pub fn par_pixels(&self) -> PixelsPar
<P>
{ PixelsPar { chunks: self .inner_pixels() .par_chunks_exact(
<P as Pixel>
::CHANNEL_COUNT as usize), } } /// Returns a parallel iterator over the pixels of this image and their coordinates, usable with `rayon`. /// See [`enumerate_pixels`] for more information. /// /// [`enumerate_pixels`]: #method.enumerate_pixels pub fn par_enumerate_pixels(&self) -> EnumeratePixelsPar
<P>
{ EnumeratePixelsPar { pixels: self.par_pixels(), width: self.width(), } } } impl ImageBuffer where P: Pixel + Send + Sync, P::Subpixel: Send + Sync, Container: Deref + DerefMut, { /// Returns a parallel iterator over the mutable pixels of this image, usable with `rayon`. /// See [`pixels_mut`] for more information. /// /// [`pixels_mut`]: #method.pixels_mut pub fn par_pixels_mut(&mut self) -> PixelsMutPar
<P>
{ PixelsMutPar { chunks: self .inner_pixels_mut() .par_chunks_exact_mut(
<P as Pixel>
::CHANNEL_COUNT as usize), } } /// Returns a parallel iterator over the mutable pixels of this image and their coordinates, usable with `rayon`. /// See [`enumerate_pixels_mut`] for more information. /// /// [`enumerate_pixels_mut`]: #method.enumerate_pixels_mut pub fn par_enumerate_pixels_mut(&mut self) -> EnumeratePixelsMutPar
<P>
{ let width = self.width(); EnumeratePixelsMutPar { pixels: self.par_pixels_mut(), width, } } } impl
<P>
ImageBuffer> where P: Pixel + Send + Sync, P::Subpixel: Send + Sync, { /// Constructs a new `ImageBuffer` by repeated application of the supplied function, /// utilizing multi-threading via `rayon`. /// /// The arguments to the function are the pixel's x and y coordinates. /// /// # Panics /// /// Panics when the resulting image is larger the the maximum size of a vector. pub fn from_par_fn(width: u32, height: u32, f: F) -> ImageBuffer> where F: Fn(u32, u32) -> P + Send + Sync, { let mut buf = ImageBuffer::new(width, height); buf.par_enumerate_pixels_mut().for_each(|(x, y, p)| { *p = f(x, y); }); buf } } #[cfg(test)] mod test { use crate::{Rgb, RgbImage}; use rayon::iter::{IndexedParallelIterator, ParallelIterator}; fn test_width_height(width: u32, height: u32, len: usize) { let mut image = RgbImage::new(width, height); assert_eq!(image.par_enumerate_pixels_mut().len(), len); assert_eq!(image.par_enumerate_pixels().len(), len); assert_eq!(image.par_pixels_mut().len(), len); assert_eq!(image.par_pixels().len(), len); } #[test] fn zero_width_zero_height() { test_width_height(0, 0, 0); } #[test] fn zero_width_nonzero_height() { test_width_height(0, 2, 0); } #[test] fn nonzero_width_zero_height() { test_width_height(2, 0, 0); } #[test] fn iter_parity() { let mut image1 = RgbImage::from_fn(17, 29, |x, y| { Rgb(std::array::from_fn(|i| { ((x + y * 98 + i as u32 * 27) % 255) as u8 })) }); let mut image2 = image1.clone(); assert_eq!( image1.enumerate_pixels_mut().collect::>(), image2.par_enumerate_pixels_mut().collect::>() ); assert_eq!( image1.enumerate_pixels().collect::>(), image2.par_enumerate_pixels().collect::>() ); assert_eq!( image1.pixels_mut().collect::>(), image2.par_pixels_mut().collect::>() ); assert_eq!( image1.pixels().collect::>(), image2.par_pixels().collect::>() ); } } #[cfg(test)] #[cfg(feature = "benchmarks")] mod benchmarks { use crate::{Rgb, RgbImage}; const S: u32 = 1024; #[bench] fn creation(b: &mut test::Bencher) { let mut bytes = 0; b.iter(|| { let img = RgbImage::from_fn(S, S, |_, _| test::black_box(pixel_func())); bytes += img.as_raw().len() as u64; }); b.bytes = bytes; } #[bench] fn creation_par(b: &mut test::Bencher) { let mut bytes = 0; b.iter(|| { let img = RgbImage::from_par_fn(S, S, |_, _| test::black_box(pixel_func())); bytes += img.as_raw().len() as u64; }); b.bytes = bytes; } fn pixel_func() -> Rgb { use std::collections::hash_map::RandomState; use std::hash::{BuildHasher, Hasher}; Rgb(std::array::from_fn(|_| { RandomState::new().build_hasher().finish() as u8 })) } } image-0.25.5/src/codecs/avif/decoder.rs000064400000000000000000000517261046102023000157410ustar 00000000000000//! Decoding of AVIF images. use crate::error::{ DecodingError, ImageFormatHint, LimitError, LimitErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::{ColorType, ImageDecoder, ImageError, ImageFormat, ImageResult}; /// /// The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec. /// /// [AVIF]: https://aomediacodec.github.io/av1-avif/ use std::error::Error; use std::fmt::{Display, Formatter}; use std::io::Read; use std::marker::PhantomData; use crate::codecs::avif::yuv::*; use dav1d::{PixelLayout, PlanarImageComponent}; use mp4parse::{read_avif, ParseStrictness}; fn error_map>>(err: E) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Avif.into(), err)) } /// AVIF Decoder. /// /// Reads one image into the chosen input. 
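///
/// A minimal decoding sketch, shown here for illustration only. It assumes the crate is
/// built with native AVIF decoding enabled and that an `input.avif` file exists; the
/// file name is a placeholder.
///
/// ```no_run
/// use image::codecs::avif::AvifDecoder;
/// use image::ImageDecoder;
///
/// let reader = std::io::BufReader::new(std::fs::File::open("input.avif").unwrap());
/// let decoder = AvifDecoder::new(reader).unwrap();
/// // Allocate exactly the number of bytes the decoded output needs.
/// let mut pixels = vec![0u8; decoder.total_bytes() as usize];
/// decoder.read_image(&mut pixels).unwrap();
/// ```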
pub struct AvifDecoder { inner: PhantomData, picture: dav1d::Picture, alpha_picture: Option, icc_profile: Option>, } #[derive(Debug, Clone, PartialEq, Eq)] enum AvifDecoderError { AlphaPlaneFormat(PixelLayout), YuvLayoutOnIdentityMatrix(PixelLayout), } impl Display for AvifDecoderError { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { match self { AvifDecoderError::AlphaPlaneFormat(pixel_layout) => match pixel_layout { PixelLayout::I400 => unreachable!("This option must be handled correctly"), PixelLayout::I420 => f.write_str("Alpha layout must be 4:0:0 but it was 4:2:0"), PixelLayout::I422 => f.write_str("Alpha layout must be 4:0:0 but it was 4:2:2"), PixelLayout::I444 => f.write_str("Alpha layout must be 4:0:0 but it was 4:4:4"), }, AvifDecoderError::YuvLayoutOnIdentityMatrix(pixel_layout) => match pixel_layout { PixelLayout::I400 => { f.write_str("YUV layout on 'Identity' matrix must be 4:4:4 but it was 4:0:0") } PixelLayout::I420 => { f.write_str("YUV layout on 'Identity' matrix must be 4:4:4 but it was 4:2:0") } PixelLayout::I422 => { f.write_str("YUV layout on 'Identity' matrix must be 4:4:4 but it was 4:2:2") } PixelLayout::I444 => unreachable!("This option must be handled correctly"), }, } } } impl Error for AvifDecoderError {} impl AvifDecoder { /// Create a new decoder that reads its input from `r`. pub fn new(mut r: R) -> ImageResult { let ctx = read_avif(&mut r, ParseStrictness::Normal).map_err(error_map)?; let coded = ctx.primary_item_coded_data().unwrap_or_default(); let mut primary_decoder = dav1d::Decoder::new().map_err(error_map)?; primary_decoder .send_data(coded.to_vec(), None, None, None) .map_err(error_map)?; let picture = read_until_ready(&mut primary_decoder)?; let alpha_item = ctx.alpha_item_coded_data().unwrap_or_default(); let alpha_picture = if !alpha_item.is_empty() { let mut alpha_decoder = dav1d::Decoder::new().map_err(error_map)?; alpha_decoder .send_data(alpha_item.to_vec(), None, None, None) .map_err(error_map)?; Some(read_until_ready(&mut alpha_decoder)?) 
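// (When present, the alpha channel arrives as a separate coded item and is decoded by its
// own dav1d instance; `read_image` later merges it with the color planes.)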
} else { None }; let icc_profile = ctx .icc_colour_information() .map(|x| x.ok().unwrap_or_default()) .map(|x| x.to_vec()); match picture.bit_depth() { 8 => (), 10 | 12 => (), _ => { return ImageResult::Err(ImageError::Decoding(DecodingError::new( ImageFormatHint::Exact(ImageFormat::Avif), format!( "Avif format does not support {} bit depth", picture.bit_depth() ), ))) } }; Ok(AvifDecoder { inner: PhantomData, picture, alpha_picture, icc_profile, }) } } /// Reshaping incorrectly aligned or sized FFI data into Rust constraints fn reshape_plane(source: &[u8], stride: usize, width: usize, height: usize) -> Vec { let mut target_plane = vec![0u16; width * height]; for (shaped_row, src_row) in target_plane .chunks_exact_mut(width) .zip(source.chunks_exact(stride)) { for (dst, src) in shaped_row.iter_mut().zip(src_row.chunks_exact(2)) { *dst = u16::from_ne_bytes([src[0], src[1]]); } } target_plane } struct Plane16View<'a> { data: std::borrow::Cow<'a, [u16]>, stride: usize, } impl Default for Plane16View<'_> { fn default() -> Self { Plane16View { data: std::borrow::Cow::Owned(vec![]), stride: 0, } } } /// This is correct to transmute FFI data for Y plane and Alpha plane fn transmute_y_plane16( plane: &dav1d::Plane, stride: usize, width: usize, height: usize, ) -> Plane16View { let mut y_plane_stride = stride >> 1; let mut bind_y = vec![]; let plane_ref = plane.as_ref(); let mut shape_y_plane = || { y_plane_stride = width; bind_y = reshape_plane(plane_ref, stride, width, height); }; if stride & 1 == 0 { match bytemuck::try_cast_slice(plane_ref) { Ok(slice) => Plane16View { data: std::borrow::Cow::Borrowed(slice), stride: y_plane_stride, }, Err(_) => { shape_y_plane(); Plane16View { data: std::borrow::Cow::Owned(bind_y), stride: y_plane_stride, } } } } else { shape_y_plane(); Plane16View { data: std::borrow::Cow::Owned(bind_y), stride: y_plane_stride, } } } /// This is correct to transmute FFI data for Y plane and Alpha plane fn transmute_chroma_plane16( plane: &dav1d::Plane, pixel_layout: PixelLayout, stride: usize, width: usize, height: usize, ) -> Plane16View { let plane_ref = plane.as_ref(); let mut chroma_plane_stride = stride >> 1; let mut bind_chroma = vec![]; let mut shape_chroma_plane = || { chroma_plane_stride = match pixel_layout { PixelLayout::I400 => unreachable!(), PixelLayout::I420 | PixelLayout::I422 => (width + 1) / 2, PixelLayout::I444 => width, }; let u_plane_height = match pixel_layout { PixelLayout::I400 => unreachable!(), PixelLayout::I420 => (height + 1) / 2, PixelLayout::I422 | PixelLayout::I444 => height, }; bind_chroma = reshape_plane(plane_ref, stride, chroma_plane_stride, u_plane_height); }; if stride & 1 == 0 { match bytemuck::try_cast_slice(plane_ref) { Ok(slice) => Plane16View { data: std::borrow::Cow::Borrowed(slice), stride: chroma_plane_stride, }, Err(_) => { shape_chroma_plane(); Plane16View { data: std::borrow::Cow::Owned(bind_chroma), stride: chroma_plane_stride, } } } } else { shape_chroma_plane(); Plane16View { data: std::borrow::Cow::Owned(bind_chroma), stride: chroma_plane_stride, } } } /// Getting one of prebuilt matrix of fails fn get_matrix( david_matrix: dav1d::pixel::MatrixCoefficients, ) -> Result { match david_matrix { dav1d::pixel::MatrixCoefficients::Identity => Ok(YuvStandardMatrix::Identity), dav1d::pixel::MatrixCoefficients::BT709 => Ok(YuvStandardMatrix::Bt709), // This is arguable, some applications prefer to go with Bt.709 as default, // and some applications prefer Bt.601 as default. // For ex. 
`Chrome` always prefer Bt.709 even for SD content // However, nowadays standard should be Bt.709 for HD+ size otherwise Bt.601 dav1d::pixel::MatrixCoefficients::Unspecified => Ok(YuvStandardMatrix::Bt709), dav1d::pixel::MatrixCoefficients::Reserved => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature( "Using 'Reserved' color matrix is not supported".to_string(), ), ), )), dav1d::pixel::MatrixCoefficients::BT470M => Ok(YuvStandardMatrix::Bt470_6), dav1d::pixel::MatrixCoefficients::BT470BG => Ok(YuvStandardMatrix::Bt601), dav1d::pixel::MatrixCoefficients::ST170M => Ok(YuvStandardMatrix::Smpte240), dav1d::pixel::MatrixCoefficients::ST240M => Ok(YuvStandardMatrix::Smpte240), // This is an experimental matrix in libavif yet. dav1d::pixel::MatrixCoefficients::YCgCo => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature("YCgCo matrix is not supported".to_string()), ), )), dav1d::pixel::MatrixCoefficients::BT2020NonConstantLuminance => { Ok(YuvStandardMatrix::Bt2020) } dav1d::pixel::MatrixCoefficients::BT2020ConstantLuminance => { // This matrix significantly differs from others because linearize values is required // to compute Y instead of Y'. // Actually it is almost everywhere is not implemented. // Libavif + libheif missing this also so actually AVIF images // with CL BT.2020 might be made only by mistake Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature( "BT2020ConstantLuminance matrix is not supported".to_string(), ), ), )) } dav1d::pixel::MatrixCoefficients::ST2085 => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature("ST2085 matrix is not supported".to_string()), ), )), dav1d::pixel::MatrixCoefficients::ChromaticityDerivedConstantLuminance | dav1d::pixel::MatrixCoefficients::ChromaticityDerivedNonConstantLuminance => Err( ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature( "Chromaticity Derived Luminance matrix is not supported".to_string(), ), )), ), dav1d::pixel::MatrixCoefficients::ICtCp => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::GenericFeature( "ICtCp Derived Luminance matrix is not supported".to_string(), ), ), )), } } impl ImageDecoder for AvifDecoder { fn dimensions(&self) -> (u32, u32) { (self.picture.width(), self.picture.height()) } fn color_type(&self) -> ColorType { if self.picture.bit_depth() == 8 { ColorType::Rgba8 } else { ColorType::Rgba16 } } fn icc_profile(&mut self) -> ImageResult>> { Ok(self.icc_profile.clone()) } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); let bit_depth = self.picture.bit_depth(); // Normally this should never happen, // if this happens then there is an incorrect implementation somewhere else assert!(bit_depth == 8 || bit_depth == 10 || bit_depth == 12); let (width, height) = self.dimensions(); // This is suspicious if this happens, better fail early if width == 0 || height == 0 { return Err(ImageError::Limits(LimitError::from_kind( LimitErrorKind::DimensionError, ))); } let yuv_range = match self.picture.color_range() { dav1d::pixel::YUVRange::Limited => YuvIntensityRange::Tv, dav1d::pixel::YUVRange::Full => 
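// Editorial note (not from the upstream source): `Limited`/`Tv` versus `Full`/`Pc`
// only changes which code values map to black and white. Using the formulas in
// `YuvIntensityRange::get_yuv_range` from yuv.rs:
//
//   limited (TV), 8-bit:  Y in [16, 235]  (bias_y = 16, range_y = 219), UV bias 128
//   limited (TV), 10-bit: Y in [64, 940]  (bias_y = 64, range_y = 876), UV bias 512
//   full (PC),    8-bit:  Y in [0, 255]   (bias_y = 0,  range_y = 255)
//
// so a full-range stream needs no luma re-scaling, while a limited-range one is
// stretched by roughly 255/219.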
YuvIntensityRange::Pc, }; let color_matrix = get_matrix(self.picture.matrix_coefficients())?; // Identity matrix should be possible only on 4:4:4 if color_matrix == YuvStandardMatrix::Identity && self.picture.pixel_layout() != PixelLayout::I444 { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Avif.into(), AvifDecoderError::YuvLayoutOnIdentityMatrix(self.picture.pixel_layout()), ))); } if bit_depth == 8 { let ref_y = self.picture.plane(PlanarImageComponent::Y); let ref_u = self.picture.plane(PlanarImageComponent::U); let ref_v = self.picture.plane(PlanarImageComponent::V); let image = YuvPlanarImage { y_plane: ref_y.as_ref(), y_stride: self.picture.stride(PlanarImageComponent::Y) as usize, u_plane: ref_u.as_ref(), u_stride: self.picture.stride(PlanarImageComponent::U) as usize, v_plane: ref_v.as_ref(), v_stride: self.picture.stride(PlanarImageComponent::V) as usize, width: width as usize, height: height as usize, }; let worker = match self.picture.pixel_layout() { PixelLayout::I400 => yuv400_to_rgba8, PixelLayout::I420 => yuv420_to_rgba8, PixelLayout::I422 => yuv422_to_rgba8, PixelLayout::I444 => yuv444_to_rgba8, }; worker(image, buf, yuv_range, color_matrix)?; // Squashing alpha plane into a picture if let Some(picture) = self.alpha_picture { if picture.pixel_layout() != PixelLayout::I400 { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Avif.into(), AvifDecoderError::AlphaPlaneFormat(picture.pixel_layout()), ))); } let stride = picture.stride(PlanarImageComponent::Y) as usize; let plane = picture.plane(PlanarImageComponent::Y); for (buf, slice) in Iterator::zip( buf.chunks_exact_mut(width as usize * 4), plane.as_ref().chunks_exact(stride), ) { for (rgba, a_src) in buf.chunks_exact_mut(4).zip(slice) { rgba[3] = *a_src; } } } } else { // // 8+ bit-depth case if let Ok(buf) = bytemuck::try_cast_slice_mut(buf) { let target_slice: &mut [u16] = buf; self.process_16bit_picture(target_slice, yuv_range, color_matrix)?; } else { // If buffer from Decoder is unaligned let mut aligned_store = vec![0u16; buf.len() / 2]; self.process_16bit_picture(&mut aligned_store, yuv_range, color_matrix)?; for (dst, src) in buf.chunks_exact_mut(2).zip(aligned_store.iter()) { let bytes = src.to_ne_bytes(); dst[0] = bytes[0]; dst[1] = bytes[1]; } } } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } impl AvifDecoder { fn process_16bit_picture( &self, target: &mut [u16], yuv_range: YuvIntensityRange, color_matrix: YuvStandardMatrix, ) -> ImageResult<()> { let y_dav1d_plane = self.picture.plane(PlanarImageComponent::Y); let (width, height) = (self.picture.width(), self.picture.height()); let bit_depth = self.picture.bit_depth(); // dav1d may return not aligned and not correctly constrained data, // or at least I can't find guarantees on that // so if it is happened, instead casting we'll need to reshape it into a target slice // required criteria: bytemuck allows this align of this data, and stride must be dividable by 2 let y_plane_view = transmute_y_plane16( &y_dav1d_plane, self.picture.stride(PlanarImageComponent::Y) as usize, width as usize, height as usize, ); let u_dav1d_plane = self.picture.plane(PlanarImageComponent::U); let v_dav1d_plane = self.picture.plane(PlanarImageComponent::V); let mut u_plane_view = Plane16View::default(); let mut v_plane_view = Plane16View::default(); if self.picture.pixel_layout() != PixelLayout::I400 { u_plane_view = transmute_chroma_plane16( &u_dav1d_plane, self.picture.pixel_layout(), 
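// Editorial note (not from the upstream source): the chroma planes handed over by
// dav1d are smaller than the luma plane for subsampled layouts. Matching the
// arithmetic in `transmute_chroma_plane16`, defined earlier in this file:
//
//   4:2:0 -> chroma width = (width + 1) / 2, chroma height = (height + 1) / 2
//   4:2:2 -> chroma width = (width + 1) / 2, chroma height = height
//   4:4:4 -> chroma width = width,           chroma height = height
//
// e.g. a 5x3 image in 4:2:0 carries 3x2 Cb and 3x2 Cr samples.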
self.picture.stride(PlanarImageComponent::U) as usize, width as usize, height as usize, ); v_plane_view = transmute_chroma_plane16( &v_dav1d_plane, self.picture.pixel_layout(), self.picture.stride(PlanarImageComponent::V) as usize, width as usize, height as usize, ); } let image = YuvPlanarImage { y_plane: y_plane_view.data.as_ref(), y_stride: y_plane_view.stride, u_plane: u_plane_view.data.as_ref(), u_stride: u_plane_view.stride, v_plane: v_plane_view.data.as_ref(), v_stride: v_plane_view.stride, width: width as usize, height: height as usize, }; let worker = match self.picture.pixel_layout() { PixelLayout::I400 => { if bit_depth == 10 { yuv400_to_rgba10 } else { yuv400_to_rgba12 } } PixelLayout::I420 => { if bit_depth == 10 { yuv420_to_rgba10 } else { yuv420_to_rgba12 } } PixelLayout::I422 => { if bit_depth == 10 { yuv422_to_rgba10 } else { yuv422_to_rgba12 } } PixelLayout::I444 => { if bit_depth == 10 { yuv444_to_rgba10 } else { yuv444_to_rgba12 } } }; worker(image, target, yuv_range, color_matrix)?; // Squashing alpha plane into a picture if let Some(picture) = &self.alpha_picture { if picture.pixel_layout() != PixelLayout::I400 { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Avif.into(), AvifDecoderError::AlphaPlaneFormat(picture.pixel_layout()), ))); } let a_dav1d_plane = picture.plane(PlanarImageComponent::Y); let a_plane_view = transmute_y_plane16( &a_dav1d_plane, picture.stride(PlanarImageComponent::Y) as usize, width as usize, height as usize, ); for (buf, slice) in Iterator::zip( target.chunks_exact_mut(width as usize * 4), a_plane_view.data.as_ref().chunks_exact(a_plane_view.stride), ) { for (rgba, a_src) in buf.chunks_exact_mut(4).zip(slice) { rgba[3] = *a_src; } } } // Expand current bit depth to target 16 let target_expand_bits = 16u32 - self.picture.bit_depth() as u32; for item in target.iter_mut() { *item = (*item << target_expand_bits) | (*item >> (16 - target_expand_bits)); } Ok(()) } } /// `get_picture` and `send_pending_data` yield `Again` as a non-fatal error requesting more data is sent to the decoder /// This ensures that in the case of `Again` all pending data is submitted /// This should be called after `send_data` (which does not yield `Again` when called the first time) fn read_until_ready(decoder: &mut dav1d::Decoder) -> ImageResult { loop { match decoder.get_picture() { Err(dav1d::Error::Again) => match decoder.send_pending_data() { Ok(_) => {} Err(dav1d::Error::Again) => {} Err(e) => return Err(error_map(e)), }, r => return r.map_err(error_map), } } } image-0.25.5/src/codecs/avif/encoder.rs000064400000000000000000000261201046102023000157410ustar 00000000000000//! Encoding of AVIF images. /// /// The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec. /// /// [AVIF]: https://aomediacodec.github.io/av1-avif/ use std::borrow::Cow; use std::cmp::min; use std::io::Write; use std::mem::size_of; use crate::buffer::ConvertBuffer; use crate::color::{FromColor, Luma, LumaA, Rgb, Rgba}; use crate::error::{ EncodingError, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::{ExtendedColorType, ImageBuffer, ImageEncoder, ImageFormat, Pixel}; use crate::{ImageError, ImageResult}; use bytemuck::{try_cast_slice, try_cast_slice_mut, Pod, PodCastError}; use num_traits::Zero; use ravif::{Encoder, Img, RGB8, RGBA8}; use rgb::AsPixels; /// AVIF Encoder. /// /// Writes one image into the chosen output. 
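///
/// Editorial sketch (not from the upstream source): typical usage, assuming the
/// `avif` feature is enabled and writing into an in-memory buffer. The method names
/// (`new_with_speed_quality`, `with_colorspace`, `write_image`) are the ones defined
/// below; the pixel data here is just a placeholder.
///
/// ```no_run
/// use image::codecs::avif::{AvifEncoder, ColorSpace};
/// use image::{ExtendedColorType, ImageEncoder};
///
/// let mut out = Vec::new();
/// // A 2x2 RGBA8 image, four bytes per pixel.
/// let pixels = [255u8; 2 * 2 * 4];
/// let encoder = AvifEncoder::new_with_speed_quality(&mut out, 4, 80)
///     .with_colorspace(ColorSpace::Srgb);
/// encoder
///     .write_image(&pixels, 2, 2, ExtendedColorType::Rgba8)
///     .expect("encoding to AVIF failed");
/// ```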
pub struct AvifEncoder { inner: W, encoder: Encoder, } /// An enumeration over supported AVIF color spaces #[derive(Debug, Copy, Clone, PartialEq, Eq)] #[non_exhaustive] pub enum ColorSpace { /// sRGB colorspace Srgb, /// BT.709 colorspace Bt709, } impl ColorSpace { fn to_ravif(self) -> ravif::ColorSpace { match self { Self::Srgb => ravif::ColorSpace::RGB, Self::Bt709 => ravif::ColorSpace::YCbCr, } } } enum RgbColor<'buf> { Rgb8(Img<&'buf [RGB8]>), Rgba8(Img<&'buf [RGBA8]>), } impl AvifEncoder { /// Create a new encoder that writes its output to `w`. pub fn new(w: W) -> Self { AvifEncoder::new_with_speed_quality(w, 4, 80) // `cavif` uses these defaults } /// Create a new encoder with a specified speed and quality that writes its output to `w`. /// `speed` accepts a value in the range 1-10, where 1 is the slowest and 10 is the fastest. /// Slower speeds generally yield better compression results. /// `quality` accepts a value in the range 1-100, where 1 is the worst and 100 is the best. pub fn new_with_speed_quality(w: W, speed: u8, quality: u8) -> Self { // Clamp quality and speed to range let quality = min(quality, 100); let speed = min(speed, 10); let encoder = Encoder::new() .with_quality(f32::from(quality)) .with_alpha_quality(f32::from(quality)) .with_speed(speed) .with_depth(Some(8)); AvifEncoder { inner: w, encoder } } /// Encode with the specified `color_space`. pub fn with_colorspace(mut self, color_space: ColorSpace) -> Self { self.encoder = self .encoder .with_internal_color_space(color_space.to_ravif()); self } /// Configures `rayon` thread pool size. /// The default `None` is to use all threads in the default `rayon` thread pool. pub fn with_num_threads(mut self, num_threads: Option) -> Self { self.encoder = self.encoder.with_num_threads(num_threads); self } } impl ImageEncoder for AvifEncoder { /// Encode image data with the indicated color type. /// /// The encoder currently requires all data to be RGBA8, it will be converted internally if /// necessary. When data is suitably aligned, i.e. u16 channels to two bytes, then the /// conversion may be more efficient. #[track_caller] fn write_image( mut self, data: &[u8], width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color.buffer_size(width, height); assert_eq!( expected_buffer_len, data.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", data.len(), ); self.set_color(color); // `ravif` needs strongly typed data so let's convert. We can either use a temporarily // owned version in our own buffer or zero-copy if possible by using the input buffer. // This requires going through `rgb`. let mut fallback = vec![]; // This vector is used if we need to do a color conversion. let result = match Self::encode_as_img(&mut fallback, data, width, height, color)? { RgbColor::Rgb8(buffer) => self.encoder.encode_rgb(buffer), RgbColor::Rgba8(buffer) => self.encoder.encode_rgba(buffer), }; let data = result.map_err(|err| { ImageError::Encoding(EncodingError::new(ImageFormat::Avif.into(), err)) })?; self.inner.write_all(&data.avif_file)?; Ok(()) } } impl AvifEncoder { // Does not currently do anything. Mirrors behaviour of old config function. fn set_color(&mut self, _color: ExtendedColorType) { // self.config.color_space = ColorSpace::RGB; } fn encode_as_img<'buf>( fallback: &'buf mut Vec, data: &'buf [u8], width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult> { // Error wrapping utility for color dependent buffer dimensions. 
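// Editorial note (not from the upstream source): a map of how `encode_as_img` below
// routes each input format, based on the match at the end of the function:
//
//   Rgb8 / Rgba8                  -> zero-copy reinterpretation via `rgb::AsPixels`
//   L8 / La8                      -> converted to RGBA8 through `ImageBuffer::convert`
//   L16 / La16 / Rgb16 / Rgba16   -> bytes cast to u16 (`cast_buffer`, copying only
//                                    when the input is misaligned), then converted
//                                    to RGBA8
//   anything else                 -> `UnsupportedErrorKind::Color`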
fn try_from_raw( data: &[P::Subpixel], width: u32, height: u32, ) -> ImageResult> { ImageBuffer::from_raw(width, height, data).ok_or_else(|| { ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )) }) } // Convert to target color type using few buffer allocations. fn convert_into<'buf, P>( buf: &'buf mut Vec, image: ImageBuffer, ) -> Img<&'buf [RGBA8]> where P: Pixel + 'static, Rgba: FromColor

, { let (width, height) = image.dimensions(); // TODO: conversion re-using the target buffer? let image: ImageBuffer, _> = image.convert(); *buf = image.into_raw(); Img::new(buf.as_pixels(), width as usize, height as usize) } // Cast the input slice using few buffer allocations if possible. // In particular try not to allocate if the caller did the infallible reverse. fn cast_buffer(buf: &[u8]) -> ImageResult> where Channel: Pod + Zero, { match try_cast_slice(buf) { Ok(slice) => Ok(Cow::Borrowed(slice)), Err(PodCastError::OutputSliceWouldHaveSlop) => Err(ImageError::Parameter( ParameterError::from_kind(ParameterErrorKind::DimensionMismatch), )), Err(PodCastError::TargetAlignmentGreaterAndInputNotAligned) => { // Sad, but let's allocate. // bytemuck checks alignment _before_ slop but size mismatch before this.. if buf.len() % size_of::() != 0 { Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))) } else { let len = buf.len() / size_of::(); let mut data = vec![Channel::zero(); len]; let view = try_cast_slice_mut::<_, u8>(data.as_mut_slice()).unwrap(); view.copy_from_slice(buf); Ok(Cow::Owned(data)) } } Err(err) => { // Are you trying to encode a ZST?? Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(format!("{err:?}")), ))) } } } match color { ExtendedColorType::Rgb8 => { // ravif doesn't do any checks but has some asserts, so we do the checks. let img = try_from_raw::>(data, width, height)?; // Now, internally ravif uses u32 but it takes usize. We could do some checked // conversion but instead we use that a non-empty image must be addressable. if img.pixels().len() == 0 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } Ok(RgbColor::Rgb8(Img::new( AsPixels::as_pixels(data), width as usize, height as usize, ))) } ExtendedColorType::Rgba8 => { // ravif doesn't do any checks but has some asserts, so we do the checks. let img = try_from_raw::>(data, width, height)?; // Now, internally ravif uses u32 but it takes usize. We could do some checked // conversion but instead we use that a non-empty image must be addressable. if img.pixels().len() == 0 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } Ok(RgbColor::Rgba8(Img::new( AsPixels::as_pixels(data), width as usize, height as usize, ))) } // we need a separate buffer.. ExtendedColorType::L8 => { let image = try_from_raw::>(data, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } ExtendedColorType::La8 => { let image = try_from_raw::>(data, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } // we need to really convert data.. ExtendedColorType::L16 => { let buffer = cast_buffer(data)?; let image = try_from_raw::>(&buffer, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } ExtendedColorType::La16 => { let buffer = cast_buffer(data)?; let image = try_from_raw::>(&buffer, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } ExtendedColorType::Rgb16 => { let buffer = cast_buffer(data)?; let image = try_from_raw::>(&buffer, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } ExtendedColorType::Rgba16 => { let buffer = cast_buffer(data)?; let image = try_from_raw::>(&buffer, width, height)?; Ok(RgbColor::Rgba8(convert_into(fallback, image))) } // for cases we do not support at all? 
_ => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Avif.into(), UnsupportedErrorKind::Color(color), ), )), } } } image-0.25.5/src/codecs/avif/mod.rs000064400000000000000000000007121046102023000151000ustar 00000000000000//! Encoding of AVIF images. /// /// The [AVIF] specification defines an image derivative of the AV1 bitstream, an open video codec. /// /// [AVIF]: https://aomediacodec.github.io/av1-avif/ #[cfg(feature = "avif-native")] pub use self::decoder::AvifDecoder; #[cfg(feature = "avif")] pub use self::encoder::{AvifEncoder, ColorSpace}; #[cfg(feature = "avif-native")] mod decoder; #[cfg(feature = "avif")] mod encoder; #[cfg(feature = "avif-native")] mod yuv; image-0.25.5/src/codecs/avif/yuv.rs000064400000000000000000001140531046102023000151500ustar 00000000000000use crate::error::DecodingError; use crate::{ImageError, ImageFormat}; use num_traits::AsPrimitive; use std::fmt::{Display, Formatter}; #[derive(Debug, Copy, Clone)] /// Representation of inversion matrix struct CbCrInverseTransform { pub y_coef: T, pub cr_coef: T, pub cb_coef: T, pub g_coeff_1: T, pub g_coeff_2: T, } impl CbCrInverseTransform { fn to_integers(self, precision: u32) -> CbCrInverseTransform { let precision_scale: i32 = 1i32 << (precision as i32); let cr_coef = (self.cr_coef * precision_scale as f32) as i32; let cb_coef = (self.cb_coef * precision_scale as f32) as i32; let y_coef = (self.y_coef * precision_scale as f32) as i32; let g_coef_1 = (self.g_coeff_1 * precision_scale as f32) as i32; let g_coef_2 = (self.g_coeff_2 * precision_scale as f32) as i32; CbCrInverseTransform:: { y_coef, cr_coef, cb_coef, g_coeff_1: g_coef_1, g_coeff_2: g_coef_2, } } } #[derive(Copy, Clone, Debug)] struct ErrorSize { expected: usize, received: usize, } #[derive(Copy, Clone, Debug)] enum PlaneDefinition { Y, U, V, } impl Display for PlaneDefinition { fn fmt(&self, f: &mut Formatter) -> std::fmt::Result { match self { PlaneDefinition::Y => f.write_str("Luma"), PlaneDefinition::U => f.write_str("U chroma"), PlaneDefinition::V => f.write_str("V chroma"), } } } #[derive(Debug, Clone, Copy)] enum YuvConversionError { YuvPlaneSizeMismatch(PlaneDefinition, ErrorSize), RgbDestinationSizeMismatch(ErrorSize), } impl Display for YuvConversionError { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { match self { YuvConversionError::YuvPlaneSizeMismatch(plane, error_size) => { f.write_fmt(format_args!( "For plane {} expected size is {} but was received {}", plane, error_size.received, error_size.expected, )) } YuvConversionError::RgbDestinationSizeMismatch(error_size) => { f.write_fmt(format_args!( "For RGB destination expected size is {} but was received {}", error_size.received, error_size.expected, )) } } } } impl std::error::Error for YuvConversionError {} #[inline] fn check_yuv_plane_preconditions( plane: &[V], plane_definition: PlaneDefinition, stride: usize, height: usize, ) -> Result<(), ImageError> { if plane.len() != stride * height { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Avif.into(), YuvConversionError::YuvPlaneSizeMismatch( plane_definition, ErrorSize { expected: stride * height, received: plane.len(), }, ), ))); } Ok(()) } #[inline] fn check_rgb_preconditions( rgb_data: &[V], stride: usize, height: usize, ) -> Result<(), ImageError> { if rgb_data.len() != stride * height { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Avif.into(), YuvConversionError::RgbDestinationSizeMismatch(ErrorSize { expected: stride * height, received: 
rgb_data.len(), }), ))); } Ok(()) } /// Transformation YUV to RGB with coefficients as specified in [ITU-R](https://www.itu.int/rec/T-REC-H.273/en) fn get_inverse_transform( range_bgra: u32, range_y: u32, range_uv: u32, kr: f32, kb: f32, precision: u32, ) -> CbCrInverseTransform { let range_uv = range_bgra as f32 / range_uv as f32; let y_coef = range_bgra as f32 / range_y as f32; let cr_coeff = (2f32 * (1f32 - kr)) * range_uv; let cb_coeff = (2f32 * (1f32 - kb)) * range_uv; let kg = 1.0f32 - kr - kb; assert_ne!(kg, 0., "1.0f - kr - kg must not be 0"); let g_coeff_1 = (2f32 * ((1f32 - kr) * kr / kg)) * range_uv; let g_coeff_2 = (2f32 * ((1f32 - kb) * kb / kg)) * range_uv; let exact_transform = CbCrInverseTransform { y_coef, cr_coef: cr_coeff, cb_coef: cb_coeff, g_coeff_1, g_coeff_2, }; exact_transform.to_integers(precision) } #[derive(Debug, Copy, Clone, PartialOrd, PartialEq)] /// Declares YUV range TV (limited) or PC (full), /// more info [ITU-R](https://www.itu.int/rec/T-REC-H.273/en) pub(crate) enum YuvIntensityRange { /// Limited range Y ∈ [16 << (depth - 8), 16 << (depth - 8) + 224 << (depth - 8)], /// UV ∈ [-1 << (depth - 1), -1 << (depth - 1) + 1 << (depth - 1)] Tv, /// Full range Y ∈ [0, 2^bit_depth - 1], /// UV ∈ [-1 << (depth - 1), -1 << (depth - 1) + 2^bit_depth - 1] Pc, } #[derive(Debug, Copy, Clone, PartialOrd, PartialEq)] struct YuvChromaRange { pub bias_y: u32, pub bias_uv: u32, pub range_y: u32, pub range_uv: u32, pub range: YuvIntensityRange, } impl YuvIntensityRange { const fn get_yuv_range(self, depth: u32) -> YuvChromaRange { match self { YuvIntensityRange::Tv => YuvChromaRange { bias_y: 16 << (depth - 8), bias_uv: 1 << (depth - 1), range_y: 219 << (depth - 8), range_uv: 224 << (depth - 8), range: self, }, YuvIntensityRange::Pc => YuvChromaRange { bias_y: 0, bias_uv: 1 << (depth - 1), range_uv: (1 << depth) - 1, range_y: (1 << depth) - 1, range: self, }, } } } #[derive(Debug, Copy, Clone, PartialOrd, PartialEq)] /// Declares standard prebuilt YUV conversion matrices, /// check [ITU-R](https://www.itu.int/rec/T-REC-H.273/en) information for more info pub(crate) enum YuvStandardMatrix { Bt601, Bt709, Bt2020, Smpte240, Bt470_6, Identity, } #[derive(Debug, Copy, Clone, PartialOrd, PartialEq)] struct YuvBias { kr: f32, kb: f32, } impl YuvStandardMatrix { const fn get_kr_kb(self) -> YuvBias { match self { YuvStandardMatrix::Bt601 => YuvBias { kr: 0.299f32, kb: 0.114f32, }, YuvStandardMatrix::Bt709 => YuvBias { kr: 0.2126f32, kb: 0.0722f32, }, YuvStandardMatrix::Bt2020 => YuvBias { kr: 0.2627f32, kb: 0.0593f32, }, YuvStandardMatrix::Smpte240 => YuvBias { kr: 0.087f32, kb: 0.212f32, }, YuvStandardMatrix::Bt470_6 => YuvBias { kr: 0.2220f32, kb: 0.0713f32, }, YuvStandardMatrix::Identity => unreachable!(), } } } pub(crate) struct YuvPlanarImage<'a, T> { pub(crate) y_plane: &'a [T], pub(crate) y_stride: usize, pub(crate) u_plane: &'a [T], pub(crate) u_stride: usize, pub(crate) v_plane: &'a [T], pub(crate) v_stride: usize, pub(crate) width: usize, pub(crate) height: usize, } #[inline(always)] /// Saturating rounding shift right against bit depth fn qrshr(val: i32) -> i32 { let rounding: i32 = 1 << (PRECISION - 1); let max_value: i32 = (1 << BIT_DEPTH) - 1; ((val + rounding) >> PRECISION).clamp(0, max_value) } /// Converts Yuv 400 planar format 8 bit to Rgba 8 bit /// /// # Arguments /// /// * `image`: see [YuvGrayImage] /// * `rgba`: RGBA image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv400_to_rgba8( image: 
YuvPlanarImage, rgba: &mut [u8], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv400_to_rgbx_impl::(image, rgba, range, matrix) } /// Converts Yuv 400 planar format 10 bit to Rgba 10 bit /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvGrayImage] /// * `rgba`: RGBA image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv400_to_rgba10( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv400_to_rgbx_impl::(image, rgba, range, matrix) } /// Converts Yuv 400 planar format 12 bit to Rgba 12 bit /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvGrayImage] /// * `rgba`: RGBA image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv400_to_rgba12( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv400_to_rgbx_impl::(image, rgba, range, matrix) } /// Converts Yuv 400 planar format to Rgba /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvGrayImage] /// * `rgba`: RGBA image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// #[inline] fn yuv400_to_rgbx_impl< V: Copy + AsPrimitive + 'static + Sized, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgba: &mut [V], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> where i32: AsPrimitive, { assert!( CHANNELS == 3 || CHANNELS == 4, "YUV 4:0:0 -> RGB is implemented only on 3 and 4 channels" ); assert!( (8..=16).contains(&BIT_DEPTH), "Invalid bit depth is provided" ); assert!( if BIT_DEPTH > 8 { size_of::() == 2 } else { size_of::() == 1 }, "Unsupported bit depth and data type combination" ); assert_ne!( matrix, YuvStandardMatrix::Identity, "Identity matrix cannot be used on 4:0:0" ); let y_plane = image.y_plane; let y_stride = image.y_stride; let height = image.height; let width = image.width; check_yuv_plane_preconditions(y_plane, PlaneDefinition::Y, y_stride, height)?; check_rgb_preconditions(rgba, width * CHANNELS, height)?; let rgba_stride = width * CHANNELS; let max_value = (1 << BIT_DEPTH) - 1; // If luma plane is in full range it can be just redistributed across the image if range == YuvIntensityRange::Pc { let y_iter = y_plane.chunks_exact(y_stride); let rgb_iter = rgba.chunks_exact_mut(rgba_stride); // All branches on generic const will be optimized out. for (y_src, rgb) in y_iter.zip(rgb_iter) { let rgb_chunks = rgb.chunks_exact_mut(CHANNELS); for (y_src, rgb_dst) in y_src.iter().zip(rgb_chunks) { let r = *y_src; rgb_dst[0] = r; rgb_dst[1] = r; rgb_dst[2] = r; if CHANNELS == 4 { rgb_dst[3] = max_value.as_(); } } } return Ok(()); } let range = range.get_yuv_range(BIT_DEPTH as u32); let kr_kb = matrix.get_kr_kb(); const PRECISION: i32 = 11; let inverse_transform = get_inverse_transform( (1 << BIT_DEPTH) - 1, range.range_y, range.range_uv, kr_kb.kr, kr_kb.kb, PRECISION as u32, ); let y_coef = inverse_transform.y_coef; let bias_y = range.bias_y as i32; let y_iter = y_plane.chunks_exact(y_stride); let rgb_iter = rgba.chunks_exact_mut(rgba_stride); // All branches on generic const will be optimized out. 
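// Editorial worked example (not from the upstream source) for the limited-range path
// below with BIT_DEPTH = 8 and PRECISION = 11. `get_inverse_transform` yields
// y_coef = (255 / 219 * 2048) as i32 = 2384, and `qrshr` adds a rounding term of
// 1 << 10 = 1024 before shifting right by 11 and clamping to [0, 255]:
//
//   Y = 16  (black) -> ((16  - 16) * 2384 + 1024) >> 11 = 0
//   Y = 128         -> ((128 - 16) * 2384 + 1024) >> 11 = 130
//   Y = 235 (white) -> ((235 - 16) * 2384 + 1024) >> 11 = 255
//
// Values outside [16, 235] are clamped by `qrshr`, so e.g. Y = 255 also maps to 255.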
for (y_src, rgb) in y_iter.zip(rgb_iter) { let rgb_chunks = rgb.chunks_exact_mut(CHANNELS); for (y_src, rgb_dst) in y_src.iter().zip(rgb_chunks) { let y_value = (y_src.as_() - bias_y) * y_coef; let r = qrshr::(y_value); rgb_dst[0] = r.as_(); rgb_dst[1] = r.as_(); rgb_dst[2] = r.as_(); if CHANNELS == 4 { rgb_dst[3] = max_value.as_(); } } } Ok(()) } /// Converts YUV420 8 bit-depth to Rgba 8 bit /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv420_to_rgba8( image: YuvPlanarImage, rgb: &mut [u8], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv420_to_rgbx::(image, rgb, range, matrix) } /// Converts YUV420 10 bit-depth to Rgba 10 bit-depth /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv420_to_rgba10( image: YuvPlanarImage, rgb: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv420_to_rgbx::(image, rgb, range, matrix) } /// Converts YUV420 12 bit-depth to Rgba 12 bit-depth /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv420_to_rgba12( image: YuvPlanarImage, rgb: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv420_to_rgbx::(image, rgb, range, matrix) } #[inline] fn process_halved_chroma_row< V: Copy + AsPrimitive + 'static + Sized, const PRECISION: i32, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgba: &mut [V], transform: &CbCrInverseTransform, range: &YuvChromaRange, ) where i32: AsPrimitive, { let cr_coef = transform.cr_coef; let cb_coef = transform.cb_coef; let y_coef = transform.y_coef; let g_coef_1 = transform.g_coeff_1; let g_coef_2 = transform.g_coeff_2; let max_value = (1 << BIT_DEPTH) - 1; let bias_y = range.bias_y as i32; let bias_uv = range.bias_uv as i32; let y_iter = image.y_plane.chunks_exact(2); let rgb_chunks = rgba.chunks_exact_mut(CHANNELS * 2); for (((y_src, &u_src), &v_src), rgb_dst) in y_iter.zip(image.u_plane).zip(image.v_plane).zip(rgb_chunks) { let y_value: i32 = (y_src[0].as_() - bias_y) * y_coef; let cb_value: i32 = u_src.as_() - bias_uv; let cr_value: i32 = v_src.as_() - bias_uv; let r = qrshr::(y_value + cr_coef * cr_value); let b = qrshr::(y_value + cb_coef * cb_value); let g = qrshr::(y_value - g_coef_1 * cr_value - g_coef_2 * cb_value); if CHANNELS == 4 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); rgb_dst[3] = max_value.as_(); } else if CHANNELS == 3 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); } else { unreachable!(); } let y_value = (y_src[1].as_() - bias_y) * y_coef; let r = qrshr::(y_value + cr_coef * cr_value); let b = qrshr::(y_value + cb_coef * cb_value); let g = qrshr::(y_value - g_coef_1 * cr_value - g_coef_2 * cb_value); if CHANNELS == 4 { rgb_dst[4] = r.as_(); rgb_dst[5] = g.as_(); rgb_dst[6] = b.as_(); rgb_dst[7] = max_value.as_(); } else if CHANNELS == 3 { rgb_dst[3] = r.as_(); rgb_dst[4] = g.as_(); rgb_dst[5] = b.as_(); } else { unreachable!(); } } // Process remainder if width 
is odd. if image.width & 1 != 0 { let y_left = image.y_plane.chunks_exact(2).remainder(); let rgb_chunks = rgba .chunks_exact_mut(CHANNELS * 2) .into_remainder() .chunks_exact_mut(CHANNELS); let u_iter = image.u_plane.iter().rev(); let v_iter = image.v_plane.iter().rev(); for (((y_src, u_src), v_src), rgb_dst) in y_left.iter().zip(u_iter).zip(v_iter).zip(rgb_chunks) { let y_value = (y_src.as_() - bias_y) * y_coef; let cb_value = u_src.as_() - bias_uv; let cr_value = v_src.as_() - bias_uv; let r = qrshr::(y_value + cr_coef * cr_value); let b = qrshr::(y_value + cb_coef * cb_value); let g = qrshr::(y_value - g_coef_1 * cr_value - g_coef_2 * cb_value); if CHANNELS == 4 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); rgb_dst[3] = max_value.as_(); } else if CHANNELS == 3 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); } else { unreachable!(); } } } } /// Converts YUV420 to Rgba /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// #[inline] fn yuv420_to_rgbx< V: Copy + AsPrimitive + 'static + Sized, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgb: &mut [V], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> where i32: AsPrimitive, { assert!( CHANNELS == 3 || CHANNELS == 4, "YUV 4:2:0 -> RGB is implemented only on 3 and 4 channels" ); assert!( (8..=16).contains(&BIT_DEPTH), "Invalid bit depth is provided" ); assert!( if BIT_DEPTH > 8 { size_of::() == 2 } else { size_of::() == 1 }, "Unsupported bit depth and data type combination" ); assert_ne!( matrix, YuvStandardMatrix::Identity, "Identity matrix cannot be used on 4:2:0" ); let y_plane = image.y_plane; let u_plane = image.u_plane; let v_plane = image.v_plane; let y_stride = image.y_stride; let u_stride = image.u_stride; let v_stride = image.v_stride; let chroma_height = (image.height + 1) / 2; check_yuv_plane_preconditions(y_plane, PlaneDefinition::Y, y_stride, image.height)?; check_yuv_plane_preconditions(u_plane, PlaneDefinition::U, u_stride, chroma_height)?; check_yuv_plane_preconditions(v_plane, PlaneDefinition::V, v_stride, chroma_height)?; check_rgb_preconditions(rgb, image.width * CHANNELS, image.height)?; const PRECISION: i32 = 11; let range = range.get_yuv_range(BIT_DEPTH as u32); let kr_kb = matrix.get_kr_kb(); let inverse_transform = get_inverse_transform( (1 << BIT_DEPTH) - 1, range.range_y, range.range_uv, kr_kb.kr, kr_kb.kb, PRECISION as u32, ); let rgb_stride = image.width * CHANNELS; let y_iter = y_plane.chunks_exact(y_stride * 2); let rgb_iter = rgb.chunks_exact_mut(rgb_stride * 2); let u_iter = u_plane.chunks_exact(u_stride); let v_iter = v_plane.chunks_exact(v_stride); /* Sample 4x4 YUV420 planar image start_y + 0: Y00 Y01 Y02 Y03 start_y + 4: Y04 Y05 Y06 Y07 start_y + 8: Y08 Y09 Y10 Y11 start_y + 12: Y12 Y13 Y14 Y15 start_cb + 0: Cb00 Cb01 start_cb + 2: Cb02 Cb03 start_cr + 0: Cr00 Cr01 start_cr + 2: Cr02 Cr03 For 4 luma components (2x2 on rows and cols) there are 1 chroma Cb/Cr components. Luma channel must have always exact size as RGB target layout, but chroma is not. We're sectioning an image by pair of rows, then for each pair of luma and RGB row, there is one chroma row. As chroma is shrunk by factor of 2 then we're processing by pairs of RGB and luma, for each RGB and luma pair there is one chroma component. 
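    Editorial note (not from the upstream source): in numbers, the chroma planes of a
    W x H 4:2:0 image hold ((W + 1) / 2) x ((H + 1) / 2) samples, matching the
    `chroma_height` computed above. For example a 5 x 3 image has 15 luma samples but
    only 3 x 2 = 6 Cb and 6 Cr samples, so the last chroma column and the last chroma
    row each cover a single remaining luma sample.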
If image have odd width then luma channel must be exact, and we're replicating last chroma component. If image have odd height then luma channel is exact, and we're replicating last chroma rows. */ // All branches on generic const will be optimized out. for (((y_src, u_src), v_src), rgb) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { // Since we're processing two rows in one loop we need to re-slice once more let y_iter = y_src.chunks_exact(y_stride); let rgb_iter = rgb.chunks_exact_mut(rgb_stride); for (y_src, rgba) in y_iter.zip(rgb_iter) { let image = YuvPlanarImage { y_plane: y_src, y_stride: 0, u_plane: u_src, u_stride: 0, v_plane: v_src, v_stride: 0, width: image.width, height: image.height, }; process_halved_chroma_row::( image, rgba, &inverse_transform, &range, ); } } // Process remainder if height is odd let y_iter = y_plane .chunks_exact(y_stride * 2) .remainder() .chunks_exact(y_stride); let rgb_iter = rgb.chunks_exact_mut(rgb_stride).rev(); let u_iter = u_plane.chunks_exact(u_stride).rev(); let v_iter = v_plane.chunks_exact(v_stride).rev(); for (((y_src, u_src), v_src), rgba) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { let image = YuvPlanarImage { y_plane: y_src, y_stride: 0, u_plane: u_src, u_stride: 0, v_plane: v_src, v_stride: 0, width: image.width, height: image.height, }; process_halved_chroma_row::( image, rgba, &inverse_transform, &range, ); } Ok(()) } /// Converts Yuv 422 8-bit planar format to Rgba 8-bit /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv422_to_rgba8( image: YuvPlanarImage, rgb: &mut [u8], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv422_to_rgbx_impl::(image, rgb, range, matrix) } /// Converts Yuv 422 10-bit planar format to Rgba 10-bit /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv422_to_rgba10( image: YuvPlanarImage, rgb: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv422_to_rgbx_impl::(image, rgb, range, matrix) } /// Converts Yuv 422 12-bit planar format to Rgba 12-bit /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv422_to_rgba12( image: YuvPlanarImage, rgb: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { yuv422_to_rgbx_impl::(image, rgb, range, matrix) } /// Converts Yuv 422 planar format to Rgba /// /// Stride here is not supports u16 as it can be in passed from FFI. 
/// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// fn yuv422_to_rgbx_impl< V: Copy + AsPrimitive + 'static + Sized, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgb: &mut [V], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> where i32: AsPrimitive, { assert!( CHANNELS == 3 || CHANNELS == 4, "YUV 4:2:2 -> RGB is implemented only on 3 and 4 channels" ); assert!( (8..=16).contains(&BIT_DEPTH), "Invalid bit depth is provided" ); assert!( if BIT_DEPTH > 8 { size_of::() == 2 } else { size_of::() == 1 }, "Unsupported bit depth and data type combination" ); assert_ne!( matrix, YuvStandardMatrix::Identity, "Identity matrix cannot be used on 4:2:2" ); let y_plane = image.y_plane; let u_plane = image.u_plane; let v_plane = image.v_plane; let y_stride = image.y_stride; let u_stride = image.u_stride; let v_stride = image.v_stride; let width = image.width; check_yuv_plane_preconditions(y_plane, PlaneDefinition::Y, y_stride, image.height)?; check_yuv_plane_preconditions(u_plane, PlaneDefinition::U, u_stride, image.height)?; check_yuv_plane_preconditions(v_plane, PlaneDefinition::V, v_stride, image.height)?; check_rgb_preconditions(rgb, image.width * CHANNELS, image.height)?; let range = range.get_yuv_range(BIT_DEPTH as u32); let kr_kb = matrix.get_kr_kb(); const PRECISION: i32 = 11; let inverse_transform = get_inverse_transform( (1 << BIT_DEPTH) - 1, range.range_y, range.range_uv, kr_kb.kr, kr_kb.kb, PRECISION as u32, ); /* Sample 4x4 YUV422 planar image start_y + 0: Y00 Y01 Y02 Y03 start_y + 4: Y04 Y05 Y06 Y07 start_y + 8: Y08 Y09 Y10 Y11 start_y + 12: Y12 Y13 Y14 Y15 start_cb + 0: Cb00 Cb01 start_cb + 2: Cb02 Cb03 start_cb + 4: Cb04 Cb05 start_cb + 6: Cb06 Cb07 start_cr + 0: Cr00 Cr01 start_cr + 2: Cr02 Cr03 start_cr + 4: Cr04 Cr05 start_cr + 6: Cr06 Cr07 For 2 luma components there are 1 chroma Cb/Cr components. Luma channel must have always exact size as RGB target layout, but chroma is not. As chroma is shrunk by factor of 2 then we're processing by pairs of RGB and luma, for each RGB and luma pair there is one chroma component. If image have odd width then luma channel must be exact, and we're replicating last chroma component. */ let rgb_stride = width * CHANNELS; let y_iter = y_plane.chunks_exact(y_stride); let rgb_iter = rgb.chunks_exact_mut(rgb_stride); let u_iter = u_plane.chunks_exact(u_stride); let v_iter = v_plane.chunks_exact(v_stride); // All branches on generic const will be optimized out. 
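// Editorial note (not from the upstream source): each pair of luma samples in a row
// shares one (Cb, Cr) pair, and `process_halved_chroma_row` applies the usual integer
// YCbCr -> RGB form with the coefficients prepared above:
//
//   Y' = (Y - bias_y) * y_coef
//   R  = qrshr(Y' + cr_coef * (Cr - bias_uv))
//   G  = qrshr(Y' - g_coef_1 * (Cr - bias_uv) - g_coef_2 * (Cb - bias_uv))
//   B  = qrshr(Y' + cb_coef * (Cb - bias_uv))
//
// where all coefficients are pre-scaled by 2^PRECISION (2^11) and `qrshr` does the
// round, shift and clamp back to the target bit depth.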
for (((y_src, u_src), v_src), rgba) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { let image = YuvPlanarImage { y_plane: y_src, y_stride: 0, u_plane: u_src, u_stride: 0, v_plane: v_src, v_stride: 0, width: image.width, height: image.height, }; process_halved_chroma_row::( image, rgba, &inverse_transform, &range, ); } Ok(()) } /// Converts Yuv 444 planar format 8 bit-depth to Rgba 8 bit /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(crate) fn yuv444_to_rgba8( image: YuvPlanarImage, rgba: &mut [u8], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { if matrix == YuvStandardMatrix::Identity { gbr_to_rgba8(image, rgba, range) } else { yuv444_to_rgbx_impl::(image, rgba, range, matrix) } } /// Converts Yuv 444 planar format 10 bit-depth to Rgba 10 bit /// /// Stride here is not supports u16 as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(super) fn yuv444_to_rgba10( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { if matrix == YuvStandardMatrix::Identity { gbr_to_rgba10(image, rgba, range) } else { yuv444_to_rgbx_impl::(image, rgba, range, matrix) } } /// Converts Yuv 444 planar format 12 bit-depth to Rgba 12 bit /// /// Stride here is not supports u16 as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// pub(super) fn yuv444_to_rgba12( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> { if matrix == YuvStandardMatrix::Identity { gbr_to_rgba12(image, rgba, range) } else { yuv444_to_rgbx_impl::(image, rgba, range, matrix) } } /// Converts Yuv 444 planar format to Rgba /// /// Stride here is not supports u16 as it can be in passed from FFI. 
/// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGB image layout /// * `range`: see [YuvIntensityRange] /// * `matrix`: see [YuvStandardMatrix] /// /// #[inline] fn yuv444_to_rgbx_impl< V: Copy + AsPrimitive + 'static + Sized, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgba: &mut [V], range: YuvIntensityRange, matrix: YuvStandardMatrix, ) -> Result<(), ImageError> where i32: AsPrimitive, { assert!( CHANNELS == 3 || CHANNELS == 4, "YUV 4:4:4 -> RGB is implemented only on 3 and 4 channels" ); assert!( (8..=16).contains(&BIT_DEPTH), "Invalid bit depth is provided" ); assert!( if BIT_DEPTH > 8 { size_of::() == 2 } else { size_of::() == 1 }, "Unsupported bit depth and data type combination" ); let y_plane = image.y_plane; let u_plane = image.u_plane; let v_plane = image.v_plane; let y_stride = image.y_stride; let u_stride = image.u_stride; let v_stride = image.v_stride; let height = image.height; let width = image.width; check_yuv_plane_preconditions(y_plane, PlaneDefinition::Y, y_stride, height)?; check_yuv_plane_preconditions(u_plane, PlaneDefinition::U, u_stride, height)?; check_yuv_plane_preconditions(v_plane, PlaneDefinition::V, v_stride, height)?; check_rgb_preconditions(rgba, image.width * CHANNELS, height)?; let range = range.get_yuv_range(BIT_DEPTH as u32); let kr_kb = matrix.get_kr_kb(); const PRECISION: i32 = 11; let inverse_transform = get_inverse_transform( (1 << BIT_DEPTH) - 1, range.range_y, range.range_uv, kr_kb.kr, kr_kb.kb, PRECISION as u32, ); let cr_coef = inverse_transform.cr_coef; let cb_coef = inverse_transform.cb_coef; let y_coef = inverse_transform.y_coef; let g_coef_1 = inverse_transform.g_coeff_1; let g_coef_2 = inverse_transform.g_coeff_2; let bias_y = range.bias_y as i32; let bias_uv = range.bias_uv as i32; let max_value = (1 << BIT_DEPTH) - 1; let rgb_stride = width * CHANNELS; let y_iter = y_plane.chunks_exact(y_stride); let rgb_iter = rgba.chunks_exact_mut(rgb_stride); let u_iter = u_plane.chunks_exact(u_stride); let v_iter = v_plane.chunks_exact(v_stride); // All branches on generic const will be optimized out. for (((y_src, u_src), v_src), rgb) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { let rgb_chunks = rgb.chunks_exact_mut(CHANNELS); for (((y_src, u_src), v_src), rgb_dst) in y_src.iter().zip(u_src).zip(v_src).zip(rgb_chunks) { let y_value = (y_src.as_() - bias_y) * y_coef; let cb_value = u_src.as_() - bias_uv; let cr_value = v_src.as_() - bias_uv; let r = qrshr::(y_value + cr_coef * cr_value); let b = qrshr::(y_value + cb_coef * cb_value); let g = qrshr::(y_value - g_coef_1 * cr_value - g_coef_2 * cb_value); if CHANNELS == 4 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); rgb_dst[3] = max_value.as_(); } else if CHANNELS == 3 { rgb_dst[0] = r.as_(); rgb_dst[1] = g.as_(); rgb_dst[2] = b.as_(); } else { unreachable!(); } } } Ok(()) } /// Converts Gbr 8 bit planar format to Rgba 8 bit-depth /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// /// fn gbr_to_rgba8( image: YuvPlanarImage, rgb: &mut [u8], range: YuvIntensityRange, ) -> Result<(), ImageError> { gbr_to_rgbx_impl::(image, rgb, range) } /// Converts Gbr 10 bit planar format to Rgba 10 bit-depth /// /// Stride here is not supported as it can be in passed from FFI. 
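///
/// Editorial note (not from the upstream source): with the `Identity` matrix the
/// three planes are not really Y/Cb/Cr. As `gbr_to_rgbx_impl` below shows, the "Y"
/// plane holds G, the "U" plane holds B and the "V" plane holds R, so the conversion
/// is a channel shuffle (plus a TV-range stretch when the stream is limited range)
/// rather than a matrix multiply:
///
/// ```text
/// rgba[0] (R) <- "V" plane sample
/// rgba[1] (G) <- "Y" plane sample
/// rgba[2] (B) <- "U" plane sample
/// rgba[3] (A) <- maximum value for the bit depth
/// ```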
/// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGBx image layout /// * `range`: see [YuvIntensityRange] /// /// fn gbr_to_rgba10( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, ) -> Result<(), ImageError> { gbr_to_rgbx_impl::(image, rgba, range) } /// Converts Gbr 12 bit planar format to Rgba 12 bit-depth /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgba`: RGBx image layout /// * `range`: see [YuvIntensityRange] /// /// fn gbr_to_rgba12( image: YuvPlanarImage, rgba: &mut [u16], range: YuvIntensityRange, ) -> Result<(), ImageError> { gbr_to_rgbx_impl::(image, rgba, range) } /// Converts Gbr planar format to Rgba /// /// Stride here is not supported as it can be in passed from FFI. /// /// # Arguments /// /// * `image`: see [YuvPlanarImage] /// * `rgb`: RGB image layout /// * `range`: see [YuvIntensityRange] /// /// #[inline] fn gbr_to_rgbx_impl< V: Copy + AsPrimitive + 'static + Sized, const CHANNELS: usize, const BIT_DEPTH: usize, >( image: YuvPlanarImage, rgba: &mut [V], yuv_range: YuvIntensityRange, ) -> Result<(), ImageError> where i32: AsPrimitive, { assert!( CHANNELS == 3 || CHANNELS == 4, "GBR -> RGB is implemented only on 3 and 4 channels" ); assert!( (8..=16).contains(&BIT_DEPTH), "Invalid bit depth is provided" ); assert!( if BIT_DEPTH > 8 { size_of::() == 2 } else { size_of::() == 1 }, "Unsupported bit depth and data type combination" ); let y_plane = image.y_plane; let u_plane = image.u_plane; let v_plane = image.v_plane; let y_stride = image.y_stride; let u_stride = image.u_stride; let v_stride = image.v_stride; let height = image.height; let width = image.width; check_yuv_plane_preconditions(y_plane, PlaneDefinition::Y, y_stride, height)?; check_yuv_plane_preconditions(u_plane, PlaneDefinition::U, u_stride, height)?; check_yuv_plane_preconditions(v_plane, PlaneDefinition::V, v_stride, height)?; check_rgb_preconditions(rgba, width * CHANNELS, height)?; let max_value = (1 << BIT_DEPTH) - 1; let rgb_stride = width * CHANNELS; let y_iter = y_plane.chunks_exact(y_stride); let rgb_iter = rgba.chunks_exact_mut(rgb_stride); let u_iter = u_plane.chunks_exact(u_stride); let v_iter = v_plane.chunks_exact(v_stride); match yuv_range { YuvIntensityRange::Tv => { const PRECISION: i32 = 11; // All channels on identity should use Y range let range = yuv_range.get_yuv_range(BIT_DEPTH as u32); let range_rgba = (1 << BIT_DEPTH) - 1; let y_coef = ((range_rgba as f32 / range.range_y as f32) * (1 << PRECISION) as f32) as i32; let y_bias = range.bias_y as i32; for (((y_src, u_src), v_src), rgb) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { let rgb_chunks = rgb.chunks_exact_mut(CHANNELS); for (((&y_src, &u_src), &v_src), rgb_dst) in y_src.iter().zip(u_src).zip(v_src).zip(rgb_chunks) { rgb_dst[0] = qrshr::((v_src.as_() - y_bias) * y_coef).as_(); rgb_dst[1] = qrshr::((y_src.as_() - y_bias) * y_coef).as_(); rgb_dst[2] = qrshr::((u_src.as_() - y_bias) * y_coef).as_(); if CHANNELS == 4 { rgb_dst[3] = max_value.as_(); } } } } YuvIntensityRange::Pc => { for (((y_src, u_src), v_src), rgb) in y_iter.zip(u_iter).zip(v_iter).zip(rgb_iter) { let rgb_chunks = rgb.chunks_exact_mut(CHANNELS); for (((&y_src, &u_src), &v_src), rgb_dst) in y_src.iter().zip(u_src).zip(v_src).zip(rgb_chunks) { rgb_dst[0] = v_src; rgb_dst[1] = y_src; rgb_dst[2] = u_src; if CHANNELS == 4 { rgb_dst[3] = max_value.as_(); } } } } } Ok(()) } 
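// Editorial sketch (not from the upstream source): a self-contained check of the
// fixed-point arithmetic used above. It re-implements the limited-range 8-bit luma
// scaling locally instead of calling the private helpers, so it stands on its own;
// the constants mirror PRECISION = 11 and the TV-range y_coef derived in this file.
#[cfg(test)]
mod editorial_range_sketch {
    // Same derivation as `get_inverse_transform` + `qrshr` for an 8-bit target.
    fn scale_limited_luma_8bit(y: i32) -> i32 {
        let y_coef = ((255.0f32 / 219.0) * (1 << 11) as f32) as i32; // 2384
        (((y - 16) * y_coef + (1 << 10)) >> 11).clamp(0, 255)
    }

    #[test]
    fn tv_range_endpoints_map_to_full_range() {
        assert_eq!(scale_limited_luma_8bit(16), 0); // black
        assert_eq!(scale_limited_luma_8bit(235), 255); // white
        assert_eq!(scale_limited_luma_8bit(128), 130); // mid grey
        assert_eq!(scale_limited_luma_8bit(0), 0); // clamped below
        assert_eq!(scale_limited_luma_8bit(255), 255); // clamped above
    }
}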
image-0.25.5/src/codecs/bmp/decoder.rs000064400000000000000000001450501046102023000155640ustar 00000000000000use std::cmp::{self, Ordering}; use std::io::{self, BufRead, Seek, SeekFrom}; use std::iter::{repeat, Rev}; use std::slice::ChunksMut; use std::{error, fmt}; use byteorder_lite::{LittleEndian, ReadBytesExt}; use crate::color::ColorType; use crate::error::{ DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{self, ImageDecoder, ImageFormat}; use crate::ImageDecoderRect; const BITMAPCOREHEADER_SIZE: u32 = 12; const BITMAPINFOHEADER_SIZE: u32 = 40; const BITMAPV2HEADER_SIZE: u32 = 52; const BITMAPV3HEADER_SIZE: u32 = 56; const BITMAPV4HEADER_SIZE: u32 = 108; const BITMAPV5HEADER_SIZE: u32 = 124; static LOOKUP_TABLE_3_BIT_TO_8_BIT: [u8; 8] = [0, 36, 73, 109, 146, 182, 219, 255]; static LOOKUP_TABLE_4_BIT_TO_8_BIT: [u8; 16] = [ 0, 17, 34, 51, 68, 85, 102, 119, 136, 153, 170, 187, 204, 221, 238, 255, ]; static LOOKUP_TABLE_5_BIT_TO_8_BIT: [u8; 32] = [ 0, 8, 16, 25, 33, 41, 49, 58, 66, 74, 82, 90, 99, 107, 115, 123, 132, 140, 148, 156, 165, 173, 181, 189, 197, 206, 214, 222, 230, 239, 247, 255, ]; static LOOKUP_TABLE_6_BIT_TO_8_BIT: [u8; 64] = [ 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 45, 49, 53, 57, 61, 65, 69, 73, 77, 81, 85, 89, 93, 97, 101, 105, 109, 113, 117, 121, 125, 130, 134, 138, 142, 146, 150, 154, 158, 162, 166, 170, 174, 178, 182, 186, 190, 194, 198, 202, 206, 210, 215, 219, 223, 227, 231, 235, 239, 243, 247, 251, 255, ]; static R5_G5_B5_COLOR_MASK: Bitfields = Bitfields { r: Bitfield { len: 5, shift: 10 }, g: Bitfield { len: 5, shift: 5 }, b: Bitfield { len: 5, shift: 0 }, a: Bitfield { len: 0, shift: 0 }, }; const R8_G8_B8_COLOR_MASK: Bitfields = Bitfields { r: Bitfield { len: 8, shift: 24 }, g: Bitfield { len: 8, shift: 16 }, b: Bitfield { len: 8, shift: 8 }, a: Bitfield { len: 0, shift: 0 }, }; const R8_G8_B8_A8_COLOR_MASK: Bitfields = Bitfields { r: Bitfield { len: 8, shift: 16 }, g: Bitfield { len: 8, shift: 8 }, b: Bitfield { len: 8, shift: 0 }, a: Bitfield { len: 8, shift: 24 }, }; const RLE_ESCAPE: u8 = 0; const RLE_ESCAPE_EOL: u8 = 0; const RLE_ESCAPE_EOF: u8 = 1; const RLE_ESCAPE_DELTA: u8 = 2; /// The maximum width/height the decoder will process. const MAX_WIDTH_HEIGHT: i32 = 0xFFFF; #[derive(PartialEq, Copy, Clone)] enum ImageType { Palette, RGB16, RGB24, RGB32, RGBA32, RLE8, RLE4, Bitfields16, Bitfields32, } #[derive(PartialEq)] enum BMPHeaderType { Core, Info, V2, V3, V4, V5, } #[derive(PartialEq)] enum FormatFullBytes { RGB24, RGB32, RGBA32, Format888, } enum Chunker<'a> { FromTop(ChunksMut<'a, u8>), FromBottom(Rev>), } pub(crate) struct RowIterator<'a> { chunks: Chunker<'a>, } impl<'a> Iterator for RowIterator<'a> { type Item = &'a mut [u8]; #[inline(always)] fn next(&mut self) -> Option<&'a mut [u8]> { match self.chunks { Chunker::FromTop(ref mut chunks) => chunks.next(), Chunker::FromBottom(ref mut chunks) => chunks.next(), } } } /// All errors that can occur when attempting to parse a BMP #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum DecoderError { // Failed to decompress RLE data. CorruptRleData, /// The bitfield mask interleaves set and unset bits BitfieldMaskNonContiguous, /// Bitfield mask invalid (e.g. 
too long for specified type) BitfieldMaskInvalid, /// Bitfield (of the specified width – 16- or 32-bit) mask not present BitfieldMaskMissing(u32), /// Bitfield (of the specified width – 16- or 32-bit) masks not present BitfieldMasksMissing(u32), /// BMP's "BM" signature wrong or missing BmpSignatureInvalid, /// More than the exactly one allowed plane specified by the format MoreThanOnePlane, /// Invalid amount of bits per channel for the specified image type InvalidChannelWidth(ChannelWidthError, u16), /// The width is negative NegativeWidth(i32), /// One of the dimensions is larger than a soft limit ImageTooLarge(i32, i32), /// The height is `i32::min_value()` /// /// General negative heights specify top-down DIBs InvalidHeight, /// Specified image type is invalid for top-down BMPs (i.e. is compressed) ImageTypeInvalidForTopDown(u32), /// Image type not currently recognized by the decoder ImageTypeUnknown(u32), /// Bitmap header smaller than the core header HeaderTooSmall(u32), /// The palette is bigger than allowed by the bit count of the BMP PaletteSizeExceeded { colors_used: u32, bit_count: u16, }, } impl fmt::Display for DecoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { DecoderError::CorruptRleData => f.write_str("Corrupt RLE data"), DecoderError::BitfieldMaskNonContiguous => f.write_str("Non-contiguous bitfield mask"), DecoderError::BitfieldMaskInvalid => f.write_str("Invalid bitfield mask"), DecoderError::BitfieldMaskMissing(bb) => { f.write_fmt(format_args!("Missing {bb}-bit bitfield mask")) } DecoderError::BitfieldMasksMissing(bb) => { f.write_fmt(format_args!("Missing {bb}-bit bitfield masks")) } DecoderError::BmpSignatureInvalid => f.write_str("BMP signature not found"), DecoderError::MoreThanOnePlane => f.write_str("More than one plane"), DecoderError::InvalidChannelWidth(tp, n) => { f.write_fmt(format_args!("Invalid channel bit count for {tp}: {n}")) } DecoderError::NegativeWidth(w) => f.write_fmt(format_args!("Negative width ({w})")), DecoderError::ImageTooLarge(w, h) => f.write_fmt(format_args!( "Image too large (one of ({w}, {h}) > soft limit of {MAX_WIDTH_HEIGHT})" )), DecoderError::InvalidHeight => f.write_str("Invalid height"), DecoderError::ImageTypeInvalidForTopDown(tp) => f.write_fmt(format_args!( "Invalid image type {tp} for top-down image." 
)), DecoderError::ImageTypeUnknown(tp) => { f.write_fmt(format_args!("Unknown image compression type {tp}")) } DecoderError::HeaderTooSmall(s) => { f.write_fmt(format_args!("Bitmap header too small ({s} bytes)")) } DecoderError::PaletteSizeExceeded { colors_used, bit_count, } => f.write_fmt(format_args!( "Palette size {colors_used} exceeds maximum size for BMP with bit count of {bit_count}" )), } } } impl From for ImageError { fn from(e: DecoderError) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Bmp.into(), e)) } } impl error::Error for DecoderError {} /// Distinct image types whose saved channel width can be invalid #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum ChannelWidthError { /// RGB Rgb, /// 8-bit run length encoding Rle8, /// 4-bit run length encoding Rle4, /// Bitfields (16- or 32-bit) Bitfields, } impl fmt::Display for ChannelWidthError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(match self { ChannelWidthError::Rgb => "RGB", ChannelWidthError::Rle8 => "RLE8", ChannelWidthError::Rle4 => "RLE4", ChannelWidthError::Bitfields => "bitfields", }) } } /// Convenience function to check if the combination of width, length and number of /// channels would result in a buffer that would overflow. fn check_for_overflow(width: i32, length: i32, channels: usize) -> ImageResult<()> { num_bytes(width, length, channels) .map(|_| ()) .ok_or_else(|| { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({width}x{length} w/{channels} channels) are too large" )), )) }) } /// Calculate how many many bytes a buffer holding a decoded image with these properties would /// require. Returns `None` if the buffer size would overflow or if one of the sizes are negative. fn num_bytes(width: i32, length: i32, channels: usize) -> Option { if width <= 0 || length <= 0 { None } else { match channels.checked_mul(width as usize) { Some(n) => n.checked_mul(length as usize), None => None, } } } /// Call the provided function on each row of the provided buffer, returning Err if the provided /// function returns an error, extends the buffer if it's not large enough. fn with_rows( buffer: &mut [u8], width: i32, height: i32, channels: usize, top_down: bool, mut func: F, ) -> io::Result<()> where F: FnMut(&mut [u8]) -> io::Result<()>, { // An overflow should already have been checked for when this is called, // though we check anyhow, as it somehow seems to increase performance slightly. let row_width = channels.checked_mul(width as usize).unwrap(); let full_image_size = row_width.checked_mul(height as usize).unwrap(); assert_eq!(buffer.len(), full_image_size); if !top_down { for row in buffer.chunks_mut(row_width).rev() { func(row)?; } } else { for row in buffer.chunks_mut(row_width) { func(row)?; } } Ok(()) } fn set_8bit_pixel_run<'a, T: Iterator>( pixel_iter: &mut ChunksMut, palette: &[[u8; 3]], indices: T, n_pixels: usize, ) -> bool { for idx in indices.take(n_pixels) { if let Some(pixel) = pixel_iter.next() { let rgb = palette[*idx as usize]; pixel[0] = rgb[0]; pixel[1] = rgb[1]; pixel[2] = rgb[2]; } else { return false; } } true } fn set_4bit_pixel_run<'a, T: Iterator>( pixel_iter: &mut ChunksMut, palette: &[[u8; 3]], indices: T, mut n_pixels: usize, ) -> bool { for idx in indices { macro_rules! 
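// Editorial note (not from the upstream source): the local `set_pixel!` macro defined
// here writes one palette entry per invocation, and `set_4bit_pixel_run` invokes it
// twice per input byte because 4-bit BMP data packs two palette indices per byte,
// high nibble first. For example the byte 0xA3 yields index 0xA for the left pixel
// and 0x3 for the right one; the `n_pixels` counter stops the expansion early so an
// odd run length does not write a spurious trailing pixel.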
set_pixel { ($i:expr) => { if n_pixels == 0 { break; } if let Some(pixel) = pixel_iter.next() { let rgb = palette[$i as usize]; pixel[0] = rgb[0]; pixel[1] = rgb[1]; pixel[2] = rgb[2]; } else { return false; } n_pixels -= 1; }; } set_pixel!(idx >> 4); set_pixel!(idx & 0xf); } true } #[rustfmt::skip] fn set_2bit_pixel_run<'a, T: Iterator>( pixel_iter: &mut ChunksMut, palette: &[[u8; 3]], indices: T, mut n_pixels: usize, ) -> bool { for idx in indices { macro_rules! set_pixel { ($i:expr) => { if n_pixels == 0 { break; } if let Some(pixel) = pixel_iter.next() { let rgb = palette[$i as usize]; pixel[0] = rgb[0]; pixel[1] = rgb[1]; pixel[2] = rgb[2]; } else { return false; } n_pixels -= 1; }; } set_pixel!((idx >> 6) & 0x3u8); set_pixel!((idx >> 4) & 0x3u8); set_pixel!((idx >> 2) & 0x3u8); set_pixel!( idx & 0x3u8); } true } fn set_1bit_pixel_run<'a, T: Iterator>( pixel_iter: &mut ChunksMut, palette: &[[u8; 3]], indices: T, ) { for idx in indices { let mut bit = 0x80; loop { if let Some(pixel) = pixel_iter.next() { let rgb = palette[((idx & bit) != 0) as usize]; pixel[0] = rgb[0]; pixel[1] = rgb[1]; pixel[2] = rgb[2]; } else { return; } bit >>= 1; if bit == 0 { break; } } } } #[derive(PartialEq, Eq)] struct Bitfield { shift: u32, len: u32, } impl Bitfield { fn from_mask(mask: u32, max_len: u32) -> ImageResult { if mask == 0 { return Ok(Bitfield { shift: 0, len: 0 }); } let mut shift = mask.trailing_zeros(); let mut len = (!(mask >> shift)).trailing_zeros(); if len != mask.count_ones() { return Err(DecoderError::BitfieldMaskNonContiguous.into()); } if len + shift > max_len { return Err(DecoderError::BitfieldMaskInvalid.into()); } if len > 8 { shift += len - 8; len = 8; } Ok(Bitfield { shift, len }) } fn read(&self, data: u32) -> u8 { let data = data >> self.shift; match self.len { 1 => ((data & 0b1) * 0xff) as u8, 2 => ((data & 0b11) * 0x55) as u8, 3 => LOOKUP_TABLE_3_BIT_TO_8_BIT[(data & 0b00_0111) as usize], 4 => LOOKUP_TABLE_4_BIT_TO_8_BIT[(data & 0b00_1111) as usize], 5 => LOOKUP_TABLE_5_BIT_TO_8_BIT[(data & 0b01_1111) as usize], 6 => LOOKUP_TABLE_6_BIT_TO_8_BIT[(data & 0b11_1111) as usize], 7 => ((data & 0x7f) << 1 | (data & 0x7f) >> 6) as u8, 8 => (data & 0xff) as u8, _ => panic!(), } } } #[derive(PartialEq, Eq)] struct Bitfields { r: Bitfield, g: Bitfield, b: Bitfield, a: Bitfield, } impl Bitfields { fn from_mask( r_mask: u32, g_mask: u32, b_mask: u32, a_mask: u32, max_len: u32, ) -> ImageResult { let bitfields = Bitfields { r: Bitfield::from_mask(r_mask, max_len)?, g: Bitfield::from_mask(g_mask, max_len)?, b: Bitfield::from_mask(b_mask, max_len)?, a: Bitfield::from_mask(a_mask, max_len)?, }; if bitfields.r.len == 0 || bitfields.g.len == 0 || bitfields.b.len == 0 { return Err(DecoderError::BitfieldMaskMissing(max_len).into()); } Ok(bitfields) } } /// A bmp decoder pub struct BmpDecoder { reader: R, bmp_header_type: BMPHeaderType, indexed_color: bool, width: i32, height: i32, data_offset: u64, top_down: bool, no_file_header: bool, add_alpha_channel: bool, has_loaded_metadata: bool, image_type: ImageType, bit_count: u16, colors_used: u32, palette: Option>, bitfields: Option, } enum RLEInsn { EndOfFile, EndOfRow, Delta(u8, u8), Absolute(u8, Vec), PixelRun(u8, u8), } impl BmpDecoder { fn new_decoder(reader: R) -> BmpDecoder { BmpDecoder { reader, bmp_header_type: BMPHeaderType::Info, indexed_color: false, width: 0, height: 0, data_offset: 0, top_down: false, no_file_header: false, add_alpha_channel: false, has_loaded_metadata: false, image_type: ImageType::Palette, bit_count: 0, colors_used: 0, 
palette: None, bitfields: None, } } /// Create a new decoder that decodes from the stream ```r``` pub fn new(reader: R) -> ImageResult> { let mut decoder = Self::new_decoder(reader); decoder.read_metadata()?; Ok(decoder) } /// Create a new decoder that decodes from the stream ```r``` without first /// reading a BITMAPFILEHEADER. This is useful for decoding the `CF_DIB` format /// directly from the Windows clipboard. pub fn new_without_file_header(reader: R) -> ImageResult> { let mut decoder = Self::new_decoder(reader); decoder.no_file_header = true; decoder.read_metadata()?; Ok(decoder) } #[cfg(feature = "ico")] pub(crate) fn new_with_ico_format(reader: R) -> ImageResult> { let mut decoder = Self::new_decoder(reader); decoder.read_metadata_in_ico_format()?; Ok(decoder) } /// If true, the palette in BMP does not apply to the image even if it is found. /// In other words, the output image is the indexed color. pub fn set_indexed_color(&mut self, indexed_color: bool) { self.indexed_color = indexed_color; } #[cfg(feature = "ico")] pub(crate) fn reader(&mut self) -> &mut R { &mut self.reader } fn read_file_header(&mut self) -> ImageResult<()> { if self.no_file_header { return Ok(()); } let mut signature = [0; 2]; self.reader.read_exact(&mut signature)?; if signature != b"BM"[..] { return Err(DecoderError::BmpSignatureInvalid.into()); } // The next 8 bytes represent file size, followed the 4 reserved bytes // We're not interesting these values self.reader.read_u32::()?; self.reader.read_u32::()?; self.data_offset = u64::from(self.reader.read_u32::()?); Ok(()) } /// Read BITMAPCOREHEADER /// /// returns Err if any of the values are invalid. fn read_bitmap_core_header(&mut self) -> ImageResult<()> { // As height/width values in BMP files with core headers are only 16 bits long, // they won't be larger than `MAX_WIDTH_HEIGHT`. self.width = i32::from(self.reader.read_u16::()?); self.height = i32::from(self.reader.read_u16::()?); check_for_overflow(self.width, self.height, self.num_channels())?; // Number of planes (format specifies that this should be 1). if self.reader.read_u16::()? != 1 { return Err(DecoderError::MoreThanOnePlane.into()); } self.bit_count = self.reader.read_u16::()?; self.image_type = match self.bit_count { 1 | 4 | 8 => ImageType::Palette, 24 => ImageType::RGB24, _ => { return Err(DecoderError::InvalidChannelWidth( ChannelWidthError::Rgb, self.bit_count, ) .into()) } }; Ok(()) } /// Read BITMAPINFOHEADER /// or BITMAPV{2|3|4|5}HEADER. /// /// returns Err if any of the values are invalid. fn read_bitmap_info_header(&mut self) -> ImageResult<()> { self.width = self.reader.read_i32::()?; self.height = self.reader.read_i32::()?; // Width can not be negative if self.width < 0 { return Err(DecoderError::NegativeWidth(self.width).into()); } else if self.width > MAX_WIDTH_HEIGHT || self.height > MAX_WIDTH_HEIGHT { // Limit very large image sizes to avoid OOM issues. Images with these sizes are // unlikely to be valid anyhow. return Err(DecoderError::ImageTooLarge(self.width, self.height).into()); } if self.height == i32::MIN { return Err(DecoderError::InvalidHeight.into()); } // A negative height indicates a top-down DIB. if self.height < 0 { self.height *= -1; self.top_down = true; } check_for_overflow(self.width, self.height, self.num_channels())?; // Number of planes (format specifies that this should be 1). if self.reader.read_u16::()? 
!= 1 { return Err(DecoderError::MoreThanOnePlane.into()); } self.bit_count = self.reader.read_u16::()?; let image_type_u32 = self.reader.read_u32::()?; // Top-down dibs can not be compressed. if self.top_down && image_type_u32 != 0 && image_type_u32 != 3 { return Err(DecoderError::ImageTypeInvalidForTopDown(image_type_u32).into()); } self.image_type = match image_type_u32 { 0 => match self.bit_count { 1 | 2 | 4 | 8 => ImageType::Palette, 16 => ImageType::RGB16, 24 => ImageType::RGB24, 32 if self.add_alpha_channel => ImageType::RGBA32, 32 => ImageType::RGB32, _ => { return Err(DecoderError::InvalidChannelWidth( ChannelWidthError::Rgb, self.bit_count, ) .into()) } }, 1 => match self.bit_count { 8 => ImageType::RLE8, _ => { return Err(DecoderError::InvalidChannelWidth( ChannelWidthError::Rle8, self.bit_count, ) .into()) } }, 2 => match self.bit_count { 4 => ImageType::RLE4, _ => { return Err(DecoderError::InvalidChannelWidth( ChannelWidthError::Rle4, self.bit_count, ) .into()) } }, 3 => match self.bit_count { 16 => ImageType::Bitfields16, 32 => ImageType::Bitfields32, _ => { return Err(DecoderError::InvalidChannelWidth( ChannelWidthError::Bitfields, self.bit_count, ) .into()) } }, 4 => { // JPEG compression is not implemented yet. return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::GenericFeature("JPEG compression".to_owned()), ), )); } 5 => { // PNG compression is not implemented yet. return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::GenericFeature("PNG compression".to_owned()), ), )); } 11..=13 => { // CMYK types are not implemented yet. return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::GenericFeature("CMYK format".to_owned()), ), )); } _ => { // Unknown compression type. return Err(DecoderError::ImageTypeUnknown(image_type_u32).into()); } }; // The next 12 bytes represent data array size in bytes, // followed the horizontal and vertical printing resolutions // We will calculate the pixel array size using width & height of image // We're not interesting the horz or vert printing resolutions self.reader.read_u32::()?; self.reader.read_u32::()?; self.reader.read_u32::()?; self.colors_used = self.reader.read_u32::()?; // The next 4 bytes represent number of "important" colors // We're not interested in this value, so we'll skip it self.reader.read_u32::()?; Ok(()) } fn read_bitmasks(&mut self) -> ImageResult<()> { let r_mask = self.reader.read_u32::()?; let g_mask = self.reader.read_u32::()?; let b_mask = self.reader.read_u32::()?; let a_mask = match self.bmp_header_type { BMPHeaderType::V3 | BMPHeaderType::V4 | BMPHeaderType::V5 => { self.reader.read_u32::()? } _ => 0, }; self.bitfields = match self.image_type { ImageType::Bitfields16 => { Some(Bitfields::from_mask(r_mask, g_mask, b_mask, a_mask, 16)?) } ImageType::Bitfields32 => { Some(Bitfields::from_mask(r_mask, g_mask, b_mask, a_mask, 32)?) 
} _ => None, }; if self.bitfields.is_some() && a_mask != 0 { self.add_alpha_channel = true; } Ok(()) } fn read_metadata(&mut self) -> ImageResult<()> { if !self.has_loaded_metadata { self.read_file_header()?; let bmp_header_offset = self.reader.stream_position()?; let bmp_header_size = self.reader.read_u32::()?; let bmp_header_end = bmp_header_offset + u64::from(bmp_header_size); self.bmp_header_type = match bmp_header_size { BITMAPCOREHEADER_SIZE => BMPHeaderType::Core, BITMAPINFOHEADER_SIZE => BMPHeaderType::Info, BITMAPV2HEADER_SIZE => BMPHeaderType::V2, BITMAPV3HEADER_SIZE => BMPHeaderType::V3, BITMAPV4HEADER_SIZE => BMPHeaderType::V4, BITMAPV5HEADER_SIZE => BMPHeaderType::V5, _ if bmp_header_size < BITMAPCOREHEADER_SIZE => { // Size of any valid header types won't be smaller than core header type. return Err(DecoderError::HeaderTooSmall(bmp_header_size).into()); } _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::GenericFeature(format!( "Unknown bitmap header type (size={bmp_header_size})" )), ), )) } }; match self.bmp_header_type { BMPHeaderType::Core => { self.read_bitmap_core_header()?; } BMPHeaderType::Info | BMPHeaderType::V2 | BMPHeaderType::V3 | BMPHeaderType::V4 | BMPHeaderType::V5 => { self.read_bitmap_info_header()?; } }; match self.image_type { ImageType::Bitfields16 | ImageType::Bitfields32 => self.read_bitmasks()?, _ => {} }; self.reader.seek(SeekFrom::Start(bmp_header_end))?; match self.image_type { ImageType::Palette | ImageType::RLE4 | ImageType::RLE8 => self.read_palette()?, _ => {} }; if self.no_file_header { // Use the offset of the end of metadata instead of reading a BMP file header. self.data_offset = self.reader.stream_position()?; } self.has_loaded_metadata = true; } Ok(()) } #[cfg(feature = "ico")] #[doc(hidden)] pub fn read_metadata_in_ico_format(&mut self) -> ImageResult<()> { self.no_file_header = true; self.add_alpha_channel = true; self.read_metadata()?; // The height field in an ICO file is doubled to account for the AND mask // (whether or not an AND mask is actually present). self.height /= 2; Ok(()) } fn get_palette_size(&mut self) -> ImageResult { match self.colors_used { 0 => Ok(1 << self.bit_count), _ => { if self.colors_used > 1 << self.bit_count { return Err(DecoderError::PaletteSizeExceeded { colors_used: self.colors_used, bit_count: self.bit_count, } .into()); } Ok(self.colors_used as usize) } } } fn bytes_per_color(&self) -> usize { match self.bmp_header_type { BMPHeaderType::Core => 3, _ => 4, } } fn read_palette(&mut self) -> ImageResult<()> { const MAX_PALETTE_SIZE: usize = 256; // Palette indices are u8. let bytes_per_color = self.bytes_per_color(); let palette_size = self.get_palette_size()?; let max_length = MAX_PALETTE_SIZE * bytes_per_color; let length = palette_size * bytes_per_color; let mut buf = Vec::with_capacity(max_length); // Resize and read the palette entries to the buffer. // We limit the buffer to at most 256 colours to avoid any oom issues as // 8-bit images can't reference more than 256 indexes anyhow. buf.resize(cmp::min(length, max_length), 0); self.reader.by_ref().read_exact(&mut buf)?; // Allocate 256 entries even if palette_size is smaller, to prevent corrupt files from // causing an out-of-bounds array access. 
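// Worked example (editor's note, not from the original source): an 8-bit BMP with an info
// header and `colors_used = 16` gives `length = 16 * 4 = 64` and `max_length = 256 * 4 = 1024`,
// so the `Ordering::Less` arm below zero-fills the buffer up to the full 256 entries; a
// palette declared longer than 256 entries would instead hit the `Ordering::Greater` arm,
// which skips the surplus palette bytes in the underlying reader.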
match length.cmp(&max_length) { Ordering::Greater => { self.reader .seek(SeekFrom::Current((length - max_length) as i64))?; } Ordering::Less => buf.resize(max_length, 0), Ordering::Equal => (), } let p: Vec<[u8; 3]> = (0..MAX_PALETTE_SIZE) .map(|i| { let b = buf[bytes_per_color * i]; let g = buf[bytes_per_color * i + 1]; let r = buf[bytes_per_color * i + 2]; [r, g, b] }) .collect(); self.palette = Some(p); Ok(()) } /// Get the palette that is embedded in the BMP image, if any. pub fn get_palette(&self) -> Option<&[[u8; 3]]> { self.palette.as_ref().map(|vec| &vec[..]) } fn num_channels(&self) -> usize { if self.indexed_color { 1 } else if self.add_alpha_channel { 4 } else { 3 } } fn rows<'a>(&self, pixel_data: &'a mut [u8]) -> RowIterator<'a> { let stride = self.width as usize * self.num_channels(); if self.top_down { RowIterator { chunks: Chunker::FromTop(pixel_data.chunks_mut(stride)), } } else { RowIterator { chunks: Chunker::FromBottom(pixel_data.chunks_mut(stride).rev()), } } } fn read_palettized_pixel_data(&mut self, buf: &mut [u8]) -> ImageResult<()> { let num_channels = self.num_channels(); let row_byte_length = ((i32::from(self.bit_count) * self.width + 31) / 32 * 4) as usize; let mut indices = vec![0; row_byte_length]; let palette = self.palette.as_ref().unwrap(); let bit_count = self.bit_count; let reader = &mut self.reader; let width = self.width as usize; let skip_palette = self.indexed_color; reader.seek(SeekFrom::Start(self.data_offset))?; if num_channels == 4 { buf.chunks_exact_mut(4).for_each(|c| c[3] = 0xFF); } with_rows( buf, self.width, self.height, num_channels, self.top_down, |row| { reader.read_exact(&mut indices)?; if skip_palette { row.clone_from_slice(&indices[0..width]); } else { let mut pixel_iter = row.chunks_mut(num_channels); match bit_count { 1 => { set_1bit_pixel_run(&mut pixel_iter, palette, indices.iter()); } 2 => { set_2bit_pixel_run(&mut pixel_iter, palette, indices.iter(), width); } 4 => { set_4bit_pixel_run(&mut pixel_iter, palette, indices.iter(), width); } 8 => { set_8bit_pixel_run(&mut pixel_iter, palette, indices.iter(), width); } _ => panic!(), }; } Ok(()) }, )?; Ok(()) } fn read_16_bit_pixel_data( &mut self, buf: &mut [u8], bitfields: Option<&Bitfields>, ) -> ImageResult<()> { let num_channels = self.num_channels(); let row_padding_len = self.width as usize % 2 * 2; let row_padding = &mut [0; 2][..row_padding_len]; let bitfields = match bitfields { Some(b) => b, None => self.bitfields.as_ref().unwrap(), }; let reader = &mut self.reader; reader.seek(SeekFrom::Start(self.data_offset))?; with_rows( buf, self.width, self.height, num_channels, self.top_down, |row| { for pixel in row.chunks_mut(num_channels) { let data = u32::from(reader.read_u16::()?); pixel[0] = bitfields.r.read(data); pixel[1] = bitfields.g.read(data); pixel[2] = bitfields.b.read(data); if num_channels == 4 { if bitfields.a.len != 0 { pixel[3] = bitfields.a.read(data); } else { pixel[3] = 0xFF; } } } reader.read_exact(row_padding) }, )?; Ok(()) } /// Read image data from a reader in 32-bit formats that use bitfields. 
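///
/// Editor's note (added): every pixel is a little-endian `u32` unpacked with the `Bitfield`
/// masks parsed from the header. For example, an X8R8G8B8 DIB with masks `0x00FF_0000`,
/// `0x0000_FF00` and `0x0000_00FF` yields `shift` values of 16/8/0 with `len = 8`, so
/// `Bitfield::read` only shifts and masks without any rescaling.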
fn read_32_bit_pixel_data(&mut self, buf: &mut [u8]) -> ImageResult<()> { let num_channels = self.num_channels(); let bitfields = self.bitfields.as_ref().unwrap(); let reader = &mut self.reader; reader.seek(SeekFrom::Start(self.data_offset))?; with_rows( buf, self.width, self.height, num_channels, self.top_down, |row| { for pixel in row.chunks_mut(num_channels) { let data = reader.read_u32::()?; pixel[0] = bitfields.r.read(data); pixel[1] = bitfields.g.read(data); pixel[2] = bitfields.b.read(data); if num_channels == 4 { if bitfields.a.len != 0 { pixel[3] = bitfields.a.read(data); } else { pixel[3] = 0xff; } } } Ok(()) }, )?; Ok(()) } /// Read image data from a reader where the colours are stored as 8-bit values (24 or 32-bit). fn read_full_byte_pixel_data( &mut self, buf: &mut [u8], format: &FormatFullBytes, ) -> ImageResult<()> { let num_channels = self.num_channels(); let row_padding_len = match *format { FormatFullBytes::RGB24 => (4 - (self.width as usize * 3) % 4) % 4, _ => 0, }; let row_padding = &mut [0; 4][..row_padding_len]; self.reader.seek(SeekFrom::Start(self.data_offset))?; let reader = &mut self.reader; with_rows( buf, self.width, self.height, num_channels, self.top_down, |row| { for pixel in row.chunks_mut(num_channels) { if *format == FormatFullBytes::Format888 { reader.read_u8()?; } // Read the colour values (b, g, r). // Reading 3 bytes and reversing them is significantly faster than reading one // at a time. reader.read_exact(&mut pixel[0..3])?; pixel[0..3].reverse(); if *format == FormatFullBytes::RGB32 { reader.read_u8()?; } // Read the alpha channel if present if *format == FormatFullBytes::RGBA32 { reader.read_exact(&mut pixel[3..4])?; } else if num_channels == 4 { pixel[3] = 0xFF; } } reader.read_exact(row_padding) }, )?; Ok(()) } fn read_rle_data(&mut self, buf: &mut [u8], image_type: ImageType) -> ImageResult<()> { // Seek to the start of the actual image data. self.reader.seek(SeekFrom::Start(self.data_offset))?; let num_channels = self.num_channels(); let p = self.palette.as_ref().unwrap(); // Handling deltas in the RLE scheme means that we need to manually // iterate through rows and pixels. Even if we didn't have to handle // deltas, we have to ensure that a single runlength doesn't straddle // two rows. 
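        // Editor's sketch of the RLE byte stream handled below (standard BMP RLE8 encoding,
        // not taken from the original comments): a control byte N > 0 followed by a palette
        // index repeats that index N times; a control byte of 0 escapes, with the next byte
        // meaning end-of-line (0), end-of-bitmap (1), a (dx, dy) delta (2), or an absolute
        // run of that many indices padded to a 16-bit boundary (>= 3). For example the
        // stream `03 04  00 02 05 01  00 00  00 01` paints three pixels of palette entry 4,
        // skips right 5 and down 1 (the skipped pixels are zeroed), ends the row, then ends
        // the file. RLE4 works the same way except every data byte packs two 4-bit indices.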
let mut row_iter = self.rows(buf); while let Some(row) = row_iter.next() { let mut pixel_iter = row.chunks_mut(num_channels); let mut x = 0; loop { let instruction = { let control_byte = self.reader.read_u8()?; match control_byte { RLE_ESCAPE => { let op = self.reader.read_u8()?; match op { RLE_ESCAPE_EOL => RLEInsn::EndOfRow, RLE_ESCAPE_EOF => RLEInsn::EndOfFile, RLE_ESCAPE_DELTA => { let xdelta = self.reader.read_u8()?; let ydelta = self.reader.read_u8()?; RLEInsn::Delta(xdelta, ydelta) } _ => { let mut length = op as usize; if self.image_type == ImageType::RLE4 { length = (length + 1) / 2; } length += length & 1; let mut buffer = vec![0; length]; self.reader.read_exact(&mut buffer)?; RLEInsn::Absolute(op, buffer) } } } _ => { let palette_index = self.reader.read_u8()?; RLEInsn::PixelRun(control_byte, palette_index) } } }; match instruction { RLEInsn::EndOfFile => { pixel_iter.for_each(|p| p.fill(0)); row_iter.for_each(|r| r.fill(0)); return Ok(()); } RLEInsn::EndOfRow => { pixel_iter.for_each(|p| p.fill(0)); break; } RLEInsn::Delta(x_delta, y_delta) => { // The msdn site on bitmap compression doesn't specify // what happens to the values skipped when encountering // a delta code, however IE and the windows image // preview seems to replace them with black pixels, // so we stick to that. if y_delta > 0 { // Zero out the remainder of the current row. pixel_iter.for_each(|p| p.fill(0)); // If any full rows are skipped, zero them out. for _ in 1..y_delta { let row = row_iter.next().ok_or(DecoderError::CorruptRleData)?; row.fill(0); } // Set the pixel iterator to the start of the next row. pixel_iter = row_iter .next() .ok_or(DecoderError::CorruptRleData)? .chunks_mut(num_channels); // Zero out the pixels up to the current point in the row. for _ in 0..x { pixel_iter .next() .ok_or(DecoderError::CorruptRleData)? .fill(0); } } for _ in 0..x_delta { let pixel = pixel_iter.next().ok_or(DecoderError::CorruptRleData)?; pixel.fill(0); } x += x_delta as usize; } RLEInsn::Absolute(length, indices) => { // Absolute mode cannot span rows, so if we run // out of pixels to process, we should stop // processing the image. match image_type { ImageType::RLE8 => { if !set_8bit_pixel_run( &mut pixel_iter, p, indices.iter(), length as usize, ) { return Err(DecoderError::CorruptRleData.into()); } } ImageType::RLE4 => { if !set_4bit_pixel_run( &mut pixel_iter, p, indices.iter(), length as usize, ) { return Err(DecoderError::CorruptRleData.into()); } } _ => unreachable!(), } x += length as usize; } RLEInsn::PixelRun(n_pixels, palette_index) => { // A pixel run isn't allowed to span rows, but we // simply continue on to the next row if we run // out of pixels to set. match image_type { ImageType::RLE8 => { if !set_8bit_pixel_run( &mut pixel_iter, p, repeat(&palette_index), n_pixels as usize, ) { return Err(DecoderError::CorruptRleData.into()); } } ImageType::RLE4 => { if !set_4bit_pixel_run( &mut pixel_iter, p, repeat(&palette_index), n_pixels as usize, ) { return Err(DecoderError::CorruptRleData.into()); } } _ => unreachable!(), } x += n_pixels as usize; } } } } Ok(()) } /// Read the actual data of the image. This function is deliberately not public because it /// cannot be called multiple times without seeking back the underlying reader in between. 
pub(crate) fn read_image_data(&mut self, buf: &mut [u8]) -> ImageResult<()> { match self.image_type { ImageType::Palette => self.read_palettized_pixel_data(buf), ImageType::RGB16 => self.read_16_bit_pixel_data(buf, Some(&R5_G5_B5_COLOR_MASK)), ImageType::RGB24 => self.read_full_byte_pixel_data(buf, &FormatFullBytes::RGB24), ImageType::RGB32 => self.read_full_byte_pixel_data(buf, &FormatFullBytes::RGB32), ImageType::RGBA32 => self.read_full_byte_pixel_data(buf, &FormatFullBytes::RGBA32), ImageType::RLE8 => self.read_rle_data(buf, ImageType::RLE8), ImageType::RLE4 => self.read_rle_data(buf, ImageType::RLE4), ImageType::Bitfields16 => match self.bitfields { Some(_) => self.read_16_bit_pixel_data(buf, None), None => Err(DecoderError::BitfieldMasksMissing(16).into()), }, ImageType::Bitfields32 => match self.bitfields { Some(R8_G8_B8_COLOR_MASK) => { self.read_full_byte_pixel_data(buf, &FormatFullBytes::Format888) } Some(R8_G8_B8_A8_COLOR_MASK) => { self.read_full_byte_pixel_data(buf, &FormatFullBytes::RGBA32) } Some(_) => self.read_32_bit_pixel_data(buf), None => Err(DecoderError::BitfieldMasksMissing(32).into()), }, } } } impl ImageDecoder for BmpDecoder { fn dimensions(&self) -> (u32, u32) { (self.width as u32, self.height as u32) } fn color_type(&self) -> ColorType { if self.indexed_color { ColorType::L8 } else if self.add_alpha_channel { ColorType::Rgba8 } else { ColorType::Rgb8 } } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); self.read_image_data(buf) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } impl ImageDecoderRect for BmpDecoder { fn read_rect( &mut self, x: u32, y: u32, width: u32, height: u32, buf: &mut [u8], row_pitch: usize, ) -> ImageResult<()> { let start = self.reader.stream_position()?; image::load_rect( x, y, width, height, buf, row_pitch, self, self.total_bytes() as usize, |_, _| Ok(()), |s, buf| s.read_image_data(buf), )?; self.reader.seek(SeekFrom::Start(start))?; Ok(()) } } #[cfg(test)] mod test { use std::io::{BufReader, Cursor}; use super::*; #[test] fn test_bitfield_len() { for len in 1..9 { let bitfield = Bitfield { shift: 0, len }; for i in 0..(1 << len) { let read = bitfield.read(i); let calc = (i as f64 / ((1 << len) - 1) as f64 * 255f64).round() as u8; if read != calc { println!("len:{} i:{} read:{} calc:{}", len, i, read, calc); } assert_eq!(read, calc); } } } #[test] fn read_rect() { let f = BufReader::new(std::fs::File::open("tests/images/bmp/images/Core_8_Bit.bmp").unwrap()); let mut decoder = BmpDecoder::new(f).unwrap(); let mut buf: Vec = vec![0; 8 * 8 * 3]; decoder.read_rect(0, 0, 8, 8, &mut buf, 8 * 3).unwrap(); } #[test] fn read_rle_too_short() { let data = vec![ 0x42, 0x4d, 0x04, 0xee, 0xfe, 0xff, 0xff, 0x10, 0xff, 0x00, 0x04, 0x00, 0x00, 0x00, 0x7c, 0x00, 0x00, 0x00, 0x0c, 0x41, 0x00, 0x00, 0x07, 0x10, 0x00, 0x00, 0x01, 0x00, 0x04, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x21, 0xff, 0x00, 0x66, 0x61, 0x72, 0x62, 0x66, 0x65, 0x6c, 0x64, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xd8, 0xff, 0x00, 0x00, 0x19, 0x51, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0xfa, 0xff, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x00, 0x00, 0x2d, 
0x31, 0x31, 0x35, 0x36, 0x00, 0xff, 0x00, 0x00, 0x52, 0x3a, 0x37, 0x30, 0x7e, 0x71, 0x63, 0x91, 0x5a, 0x04, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2d, 0x35, 0x37, 0x00, 0xff, 0x00, 0x00, 0x52, 0x3a, 0x37, 0x30, 0x7e, 0x71, 0x63, 0x91, 0x5a, 0x04, 0x05, 0x3c, 0x00, 0x00, 0x11, 0x00, 0x5d, 0x7a, 0x82, 0xb7, 0xca, 0x2d, 0x31, 0xff, 0xff, 0xc7, 0x95, 0x33, 0x2e, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7c, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x66, 0x00, 0x4d, 0x4d, 0x00, 0x2a, 0x00, ]; let decoder = BmpDecoder::new(Cursor::new(&data)).unwrap(); let mut buf = vec![0; usize::try_from(decoder.total_bytes()).unwrap()]; assert!(decoder.read_image(&mut buf).is_ok()); } #[test] fn test_no_header() { let tests = [ "Info_R8_G8_B8.bmp", "Info_A8_R8_G8_B8.bmp", "Info_8_Bit.bmp", "Info_4_Bit.bmp", "Info_1_Bit.bmp", ]; for name in &tests { let path = format!("tests/images/bmp/images/{name}"); let ref_img = crate::open(&path).unwrap(); let mut data = std::fs::read(&path).unwrap(); // skip the BITMAPFILEHEADER let slice = &mut data[14..]; let decoder = BmpDecoder::new_without_file_header(Cursor::new(slice)).unwrap(); let no_hdr_img = crate::DynamicImage::from_decoder(decoder).unwrap(); assert_eq!(ref_img, no_hdr_img); } } } image-0.25.5/src/codecs/bmp/encoder.rs000064400000000000000000000340061046102023000155740ustar 00000000000000use byteorder_lite::{LittleEndian, WriteBytesExt}; use std::io::{self, Write}; use crate::error::{ EncodingError, ImageError, ImageFormatHint, ImageResult, ParameterError, ParameterErrorKind, }; use crate::image::ImageEncoder; use crate::{ExtendedColorType, ImageFormat}; const BITMAPFILEHEADER_SIZE: u32 = 14; const BITMAPINFOHEADER_SIZE: u32 = 40; const BITMAPV4HEADER_SIZE: u32 = 108; /// The representation of a BMP encoder. pub struct BmpEncoder<'a, W: 'a> { writer: &'a mut W, } impl<'a, W: Write + 'a> BmpEncoder<'a, W> { /// Create a new encoder that writes its output to ```w```. pub fn new(w: &'a mut W) -> Self { BmpEncoder { writer: w } } /// Encodes the image `image` that has dimensions `width` and `height` and `ExtendedColorType` `c`. /// /// # Panics /// /// Panics if `width * height * c.bytes_per_pixel() != image.len()`. #[track_caller] pub fn encode( &mut self, image: &[u8], width: u32, height: u32, c: ExtendedColorType, ) -> ImageResult<()> { self.encode_with_palette(image, width, height, c, None) } /// Same as `encode`, but allow a palette to be passed in. The `palette` is ignored for color /// types other than Luma/Luma-with-alpha. /// /// # Panics /// /// Panics if `width * height * c.bytes_per_pixel() != image.len()`. #[track_caller] pub fn encode_with_palette( &mut self, image: &[u8], width: u32, height: u32, c: ExtendedColorType, palette: Option<&[[u8; 3]]>, ) -> ImageResult<()> { if palette.is_some() && c != ExtendedColorType::L8 && c != ExtendedColorType::La8 { return Err(ImageError::IoError(io::Error::new( io::ErrorKind::InvalidInput, format!( "Unsupported color type {c:?} when using a non-empty palette. Supported types: Gray(8), GrayA(8)." 
), ))); } let expected_buffer_len = c.buffer_size(width, height); assert_eq!( expected_buffer_len, image.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", image.len(), ); let bmp_header_size = BITMAPFILEHEADER_SIZE; let (dib_header_size, written_pixel_size, palette_color_count) = get_pixel_info(c, palette)?; let row_pad_size = (4 - (width * written_pixel_size) % 4) % 4; // each row must be padded to a multiple of 4 bytes let image_size = width .checked_mul(height) .and_then(|v| v.checked_mul(written_pixel_size)) .and_then(|v| v.checked_add(height * row_pad_size)) .ok_or_else(|| { ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )) })?; let palette_size = palette_color_count * 4; // all palette colors are BGRA let file_size = bmp_header_size .checked_add(dib_header_size) .and_then(|v| v.checked_add(palette_size)) .and_then(|v| v.checked_add(image_size)) .ok_or_else(|| { ImageError::Encoding(EncodingError::new( ImageFormatHint::Exact(ImageFormat::Bmp), "calculated BMP header size larger than 2^32", )) })?; // write BMP header self.writer.write_u8(b'B')?; self.writer.write_u8(b'M')?; self.writer.write_u32::(file_size)?; // file size self.writer.write_u16::(0)?; // reserved 1 self.writer.write_u16::(0)?; // reserved 2 self.writer .write_u32::(bmp_header_size + dib_header_size + palette_size)?; // image data offset // write DIB header self.writer.write_u32::(dib_header_size)?; self.writer.write_i32::(width as i32)?; self.writer.write_i32::(height as i32)?; self.writer.write_u16::(1)?; // color planes self.writer .write_u16::((written_pixel_size * 8) as u16)?; // bits per pixel if dib_header_size >= BITMAPV4HEADER_SIZE { // Assume BGRA32 self.writer.write_u32::(3)?; // compression method - bitfields } else { self.writer.write_u32::(0)?; // compression method - no compression } self.writer.write_u32::(image_size)?; self.writer.write_i32::(0)?; // horizontal ppm self.writer.write_i32::(0)?; // vertical ppm self.writer.write_u32::(palette_color_count)?; self.writer.write_u32::(0)?; // all colors are important if dib_header_size >= BITMAPV4HEADER_SIZE { // Assume BGRA32 self.writer.write_u32::(0xff << 16)?; // red mask self.writer.write_u32::(0xff << 8)?; // green mask self.writer.write_u32::(0xff)?; // blue mask self.writer.write_u32::(0xff << 24)?; // alpha mask self.writer.write_u32::(0x7352_4742)?; // colorspace - sRGB // endpoints (3x3) and gamma (3) for _ in 0..12 { self.writer.write_u32::(0)?; } } // write image data match c { ExtendedColorType::Rgb8 => self.encode_rgb(image, width, height, row_pad_size, 3)?, ExtendedColorType::Rgba8 => self.encode_rgba(image, width, height, row_pad_size, 4)?, ExtendedColorType::L8 => { self.encode_gray(image, width, height, row_pad_size, 1, palette)?; } ExtendedColorType::La8 => { self.encode_gray(image, width, height, row_pad_size, 2, palette)?; } _ => { return Err(ImageError::IoError(io::Error::new( io::ErrorKind::InvalidInput, &get_unsupported_error_message(c)[..], ))) } } Ok(()) } fn encode_rgb( &mut self, image: &[u8], width: u32, height: u32, row_pad_size: u32, bytes_per_pixel: u32, ) -> io::Result<()> { let width = width as usize; let height = height as usize; let x_stride = bytes_per_pixel as usize; let y_stride = width * x_stride; for row in (0..height).rev() { // from the bottom up let row_start = row * y_stride; for px in image[row_start..][..y_stride].chunks_exact(x_stride) { let r = px[0]; let g = px[1]; let b = px[2]; // written as BGR 
self.writer.write_all(&[b, g, r])?; } self.write_row_pad(row_pad_size)?; } Ok(()) } fn encode_rgba( &mut self, image: &[u8], width: u32, height: u32, row_pad_size: u32, bytes_per_pixel: u32, ) -> io::Result<()> { let width = width as usize; let height = height as usize; let x_stride = bytes_per_pixel as usize; let y_stride = width * x_stride; for row in (0..height).rev() { // from the bottom up let row_start = row * y_stride; for px in image[row_start..][..y_stride].chunks_exact(x_stride) { let r = px[0]; let g = px[1]; let b = px[2]; let a = px[3]; // written as BGRA self.writer.write_all(&[b, g, r, a])?; } self.write_row_pad(row_pad_size)?; } Ok(()) } fn encode_gray( &mut self, image: &[u8], width: u32, height: u32, row_pad_size: u32, bytes_per_pixel: u32, palette: Option<&[[u8; 3]]>, ) -> io::Result<()> { // write grayscale palette if let Some(palette) = palette { for item in palette { // each color is written as BGRA, where A is always 0 self.writer.write_all(&[item[2], item[1], item[0], 0])?; } } else { for val in 0u8..=255 { // each color is written as BGRA, where A is always 0 and since only grayscale is being written, B = G = R = index self.writer.write_all(&[val, val, val, 0])?; } } // write image data let x_stride = bytes_per_pixel; let y_stride = width * x_stride; for row in (0..height).rev() { // from the bottom up let row_start = row * y_stride; // color value is equal to the palette index if x_stride == 1 { // improve performance by writing the whole row at once self.writer .write_all(&image[row_start as usize..][..y_stride as usize])?; } else { for col in 0..width { let pixel_start = (row_start + (col * x_stride)) as usize; self.writer.write_u8(image[pixel_start])?; // alpha is never written as it's not widely supported } } self.write_row_pad(row_pad_size)?; } Ok(()) } fn write_row_pad(&mut self, row_pad_size: u32) -> io::Result<()> { for _ in 0..row_pad_size { self.writer.write_u8(0)?; } Ok(()) } } impl ImageEncoder for BmpEncoder<'_, W> { #[track_caller] fn write_image( mut self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { self.encode(buf, width, height, color_type) } } fn get_unsupported_error_message(c: ExtendedColorType) -> String { format!("Unsupported color type {c:?}. Supported types: RGB(8), RGBA(8), Gray(8), GrayA(8).") } /// Returns a tuple representing: (dib header size, written pixel size, palette color count). 
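/// For example (editor's note), `Rgb8` maps to `(BITMAPINFOHEADER_SIZE, 3, 0)`, while `L8`
/// without an explicit palette maps to `(BITMAPINFOHEADER_SIZE, 1, 256)`.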
fn get_pixel_info( c: ExtendedColorType, palette: Option<&[[u8; 3]]>, ) -> io::Result<(u32, u32, u32)> { let sizes = match c { ExtendedColorType::Rgb8 => (BITMAPINFOHEADER_SIZE, 3, 0), ExtendedColorType::Rgba8 => (BITMAPV4HEADER_SIZE, 4, 0), ExtendedColorType::L8 => ( BITMAPINFOHEADER_SIZE, 1, palette.map(|p| p.len()).unwrap_or(256) as u32, ), ExtendedColorType::La8 => ( BITMAPINFOHEADER_SIZE, 1, palette.map(|p| p.len()).unwrap_or(256) as u32, ), _ => { return Err(io::Error::new( io::ErrorKind::InvalidInput, &get_unsupported_error_message(c)[..], )) } }; Ok(sizes) } #[cfg(test)] mod tests { use super::super::BmpDecoder; use super::BmpEncoder; use crate::image::ImageDecoder; use crate::ExtendedColorType; use std::io::Cursor; fn round_trip_image(image: &[u8], width: u32, height: u32, c: ExtendedColorType) -> Vec { let mut encoded_data = Vec::new(); { let mut encoder = BmpEncoder::new(&mut encoded_data); encoder .encode(image, width, height, c) .expect("could not encode image"); } let decoder = BmpDecoder::new(Cursor::new(&encoded_data)).expect("failed to decode"); let mut buf = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut buf).expect("failed to decode"); buf } #[test] fn round_trip_single_pixel_rgb() { let image = [255u8, 0, 0]; // single red pixel let decoded = round_trip_image(&image, 1, 1, ExtendedColorType::Rgb8); assert_eq!(3, decoded.len()); assert_eq!(255, decoded[0]); assert_eq!(0, decoded[1]); assert_eq!(0, decoded[2]); } #[test] #[cfg(target_pointer_width = "64")] fn huge_files_return_error() { let mut encoded_data = Vec::new(); let image = vec![0u8; 3 * 40_000 * 40_000]; // 40_000x40_000 pixels, 3 bytes per pixel, allocated on the heap let mut encoder = BmpEncoder::new(&mut encoded_data); let result = encoder.encode(&image, 40_000, 40_000, ExtendedColorType::Rgb8); assert!(result.is_err()); } #[test] fn round_trip_single_pixel_rgba() { let image = [1, 2, 3, 4]; let decoded = round_trip_image(&image, 1, 1, ExtendedColorType::Rgba8); assert_eq!(&decoded[..], &image[..]); } #[test] fn round_trip_3px_rgb() { let image = [0u8; 3 * 3 * 3]; // 3x3 pixels, 3 bytes per pixel let _decoded = round_trip_image(&image, 3, 3, ExtendedColorType::Rgb8); } #[test] fn round_trip_gray() { let image = [0u8, 1, 2]; // 3 pixels let decoded = round_trip_image(&image, 3, 1, ExtendedColorType::L8); // should be read back as 3 RGB pixels assert_eq!(9, decoded.len()); assert_eq!(0, decoded[0]); assert_eq!(0, decoded[1]); assert_eq!(0, decoded[2]); assert_eq!(1, decoded[3]); assert_eq!(1, decoded[4]); assert_eq!(1, decoded[5]); assert_eq!(2, decoded[6]); assert_eq!(2, decoded[7]); assert_eq!(2, decoded[8]); } #[test] fn round_trip_graya() { let image = [0u8, 0, 1, 0, 2, 0]; // 3 pixels, each with an alpha channel let decoded = round_trip_image(&image, 1, 3, ExtendedColorType::La8); // should be read back as 3 RGB pixels assert_eq!(9, decoded.len()); assert_eq!(0, decoded[0]); assert_eq!(0, decoded[1]); assert_eq!(0, decoded[2]); assert_eq!(1, decoded[3]); assert_eq!(1, decoded[4]); assert_eq!(1, decoded[5]); assert_eq!(2, decoded[6]); assert_eq!(2, decoded[7]); assert_eq!(2, decoded[8]); } } image-0.25.5/src/codecs/bmp/mod.rs000064400000000000000000000005731046102023000147360ustar 00000000000000//! Decoding and Encoding of BMP Images //! //! A decoder and encoder for BMP (Windows Bitmap) images //! //! # Related Links //! * //! * //! 
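//!
//! # Example
//!
//! A minimal decoding sketch (editor's addition, not part of the original module docs);
//! it assumes the `Core_8_Bit.bmp` fixture that the decoder tests also use:
//!
//! ```no_run
//! use std::fs::File;
//! use std::io::BufReader;
//!
//! use image::codecs::bmp::BmpDecoder;
//! use image::ImageDecoder;
//!
//! let reader = BufReader::new(File::open("tests/images/bmp/images/Core_8_Bit.bmp")?);
//! let decoder = BmpDecoder::new(reader)?;
//! let mut pixels = vec![0u8; usize::try_from(decoder.total_bytes())?];
//! decoder.read_image(&mut pixels)?;
//! # Ok::<(), Box<dyn std::error::Error>>(())
//! ```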
pub use self::decoder::BmpDecoder; pub use self::encoder::BmpEncoder; mod decoder; mod encoder; image-0.25.5/src/codecs/dds.rs000064400000000000000000000317561046102023000141620ustar 00000000000000//! Decoding of DDS images //! //! DDS (DirectDraw Surface) is a container format for storing DXT (S3TC) compressed images. //! //! # Related Links //! * - Description of the DDS format. use std::io::Read; use std::{error, fmt}; use byteorder_lite::{LittleEndian, ReadBytesExt}; #[allow(deprecated)] use crate::codecs::dxt::{DxtDecoder, DxtVariant}; use crate::color::ColorType; use crate::error::{ DecodingError, ImageError, ImageFormatHint, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageFormat}; /// Errors that can occur during decoding and parsing a DDS image #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] #[allow(clippy::enum_variant_names)] enum DecoderError { /// Wrong DDS channel width PixelFormatSizeInvalid(u32), /// Wrong DDS header size HeaderSizeInvalid(u32), /// Wrong DDS header flags HeaderFlagsInvalid(u32), /// Invalid DXGI format in DX10 header DxgiFormatInvalid(u32), /// Invalid resource dimension ResourceDimensionInvalid(u32), /// Invalid flags in DX10 header Dx10FlagsInvalid(u32), /// Invalid array size in DX10 header Dx10ArraySizeInvalid(u32), /// DDS "DDS " signature invalid or missing DdsSignatureInvalid, } impl fmt::Display for DecoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { DecoderError::PixelFormatSizeInvalid(s) => { f.write_fmt(format_args!("Invalid DDS PixelFormat size: {s}")) } DecoderError::HeaderSizeInvalid(s) => { f.write_fmt(format_args!("Invalid DDS header size: {s}")) } DecoderError::HeaderFlagsInvalid(fs) => { f.write_fmt(format_args!("Invalid DDS header flags: {fs:#010X}")) } DecoderError::DxgiFormatInvalid(df) => { f.write_fmt(format_args!("Invalid DDS DXGI format: {df}")) } DecoderError::ResourceDimensionInvalid(d) => { f.write_fmt(format_args!("Invalid DDS resource dimension: {d}")) } DecoderError::Dx10FlagsInvalid(fs) => { f.write_fmt(format_args!("Invalid DDS DX10 header flags: {fs:#010X}")) } DecoderError::Dx10ArraySizeInvalid(s) => { f.write_fmt(format_args!("Invalid DDS DX10 array size: {s}")) } DecoderError::DdsSignatureInvalid => f.write_str("DDS signature not found"), } } } impl From for ImageError { fn from(e: DecoderError) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Dds.into(), e)) } } impl error::Error for DecoderError {} /// Header used by DDS image files #[derive(Debug)] struct Header { _flags: u32, height: u32, width: u32, _pitch_or_linear_size: u32, _depth: u32, _mipmap_count: u32, pixel_format: PixelFormat, _caps: u32, _caps2: u32, } /// Extended DX10 header used by some DDS image files #[derive(Debug)] struct DX10Header { dxgi_format: u32, resource_dimension: u32, misc_flag: u32, array_size: u32, misc_flags_2: u32, } /// DDS pixel format #[derive(Debug)] struct PixelFormat { flags: u32, fourcc: [u8; 4], _rgb_bit_count: u32, _r_bit_mask: u32, _g_bit_mask: u32, _b_bit_mask: u32, _a_bit_mask: u32, } impl PixelFormat { fn from_reader(r: &mut dyn Read) -> ImageResult { let size = r.read_u32::()?; if size != 32 { return Err(DecoderError::PixelFormatSizeInvalid(size).into()); } Ok(Self { flags: r.read_u32::()?, fourcc: { let mut v = [0; 4]; r.read_exact(&mut v)?; v }, _rgb_bit_count: r.read_u32::()?, _r_bit_mask: r.read_u32::()?, _g_bit_mask: r.read_u32::()?, _b_bit_mask: r.read_u32::()?, _a_bit_mask: r.read_u32::()?, }) } } 
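// Editor's note: the DDS_PIXELFORMAT block parsed above is 32 bytes on disk, i.e. eight
// 4-byte little-endian fields (dwSize, dwFlags, dwFourCC, dwRGBBitCount and the four
// channel bit masks), which is why `from_reader` first verifies that dwSize equals 32.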
impl Header { fn from_reader(r: &mut dyn Read) -> ImageResult { let size = r.read_u32::()?; if size != 124 { return Err(DecoderError::HeaderSizeInvalid(size).into()); } const REQUIRED_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x1000; const VALID_FLAGS: u32 = 0x1 | 0x2 | 0x4 | 0x8 | 0x1000 | 0x20000 | 0x80000 | 0x0080_0000; let flags = r.read_u32::()?; if flags & (REQUIRED_FLAGS | !VALID_FLAGS) != REQUIRED_FLAGS { return Err(DecoderError::HeaderFlagsInvalid(flags).into()); } let height = r.read_u32::()?; let width = r.read_u32::()?; let pitch_or_linear_size = r.read_u32::()?; let depth = r.read_u32::()?; let mipmap_count = r.read_u32::()?; // Skip `dwReserved1` { let mut skipped = [0; 4 * 11]; r.read_exact(&mut skipped)?; } let pixel_format = PixelFormat::from_reader(r)?; let caps = r.read_u32::()?; let caps2 = r.read_u32::()?; // Skip `dwCaps3`, `dwCaps4`, `dwReserved2` (unused) { let mut skipped = [0; 4 + 4 + 4]; r.read_exact(&mut skipped)?; } Ok(Self { _flags: flags, height, width, _pitch_or_linear_size: pitch_or_linear_size, _depth: depth, _mipmap_count: mipmap_count, pixel_format, _caps: caps, _caps2: caps2, }) } } impl DX10Header { fn from_reader(r: &mut dyn Read) -> ImageResult { let dxgi_format = r.read_u32::()?; let resource_dimension = r.read_u32::()?; let misc_flag = r.read_u32::()?; let array_size = r.read_u32::()?; let misc_flags_2 = r.read_u32::()?; let dx10_header = Self { dxgi_format, resource_dimension, misc_flag, array_size, misc_flags_2, }; dx10_header.validate()?; Ok(dx10_header) } fn validate(&self) -> Result<(), ImageError> { // Note: see https://docs.microsoft.com/en-us/windows/win32/direct3ddds/dds-header-dxt10 for info on valid values if self.dxgi_format > 132 { // Invalid format return Err(DecoderError::DxgiFormatInvalid(self.dxgi_format).into()); } if self.resource_dimension < 2 || self.resource_dimension > 4 { // Invalid dimension // Only 1D (2), 2D (3) and 3D (4) resource dimensions are allowed return Err(DecoderError::ResourceDimensionInvalid(self.resource_dimension).into()); } if self.misc_flag != 0x0 && self.misc_flag != 0x4 { // Invalid flag // Only no (0x0) and DDS_RESOURCE_MISC_TEXTURECUBE (0x4) flags are allowed return Err(DecoderError::Dx10FlagsInvalid(self.misc_flag).into()); } if self.resource_dimension == 4 && self.array_size != 1 { // Invalid array size // 3D textures (resource dimension == 4) must have an array size of 1 return Err(DecoderError::Dx10ArraySizeInvalid(self.array_size).into()); } if self.misc_flags_2 > 0x4 { // Invalid alpha flags return Err(DecoderError::Dx10FlagsInvalid(self.misc_flags_2).into()); } Ok(()) } } /// The representation of a DDS decoder pub struct DdsDecoder { #[allow(deprecated)] inner: DxtDecoder, } impl DdsDecoder { /// Create a new decoder that decodes from the stream `r` pub fn new(mut r: R) -> ImageResult { let mut magic = [0; 4]; r.read_exact(&mut magic)?; if magic != b"DDS "[..] 
{ return Err(DecoderError::DdsSignatureInvalid.into()); } let header = Header::from_reader(&mut r)?; if header.pixel_format.flags & 0x4 != 0 { #[allow(deprecated)] let variant = match &header.pixel_format.fourcc { b"DXT1" => DxtVariant::DXT1, b"DXT3" => DxtVariant::DXT3, b"DXT5" => DxtVariant::DXT5, b"DX10" => { let dx10_header = DX10Header::from_reader(&mut r)?; // Format equivalents were taken from https://docs.microsoft.com/en-us/windows/win32/direct3d11/texture-block-compression-in-direct3d-11 // The enum integer values were taken from https://docs.microsoft.com/en-us/windows/win32/api/dxgiformat/ne-dxgiformat-dxgi_format // DXT1 represents the different BC1 variants, DTX3 represents the different BC2 variants and DTX5 represents the different BC3 variants match dx10_header.dxgi_format { 70..=72 => DxtVariant::DXT1, // DXGI_FORMAT_BC1_TYPELESS, DXGI_FORMAT_BC1_UNORM or DXGI_FORMAT_BC1_UNORM_SRGB 73..=75 => DxtVariant::DXT3, // DXGI_FORMAT_BC2_TYPELESS, DXGI_FORMAT_BC2_UNORM or DXGI_FORMAT_BC2_UNORM_SRGB 76..=78 => DxtVariant::DXT5, // DXGI_FORMAT_BC3_TYPELESS, DXGI_FORMAT_BC3_UNORM or DXGI_FORMAT_BC3_UNORM_SRGB _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Dds.into(), UnsupportedErrorKind::GenericFeature(format!( "DDS DXGI Format {}", dx10_header.dxgi_format )), ), )) } } } fourcc => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Dds.into(), UnsupportedErrorKind::GenericFeature(format!("DDS FourCC {fourcc:?}")), ), )) } }; #[allow(deprecated)] let bytes_per_pixel = variant.color_type().bytes_per_pixel(); if crate::utils::check_dimension_overflow(header.width, header.height, bytes_per_pixel) { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Dds.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({}x{}) are too large", header.width, header.height )), ), )); } #[allow(deprecated)] let inner = DxtDecoder::new(r, header.width, header.height, variant)?; Ok(Self { inner }) } else { // For now, supports only DXT variants Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Dds.into(), UnsupportedErrorKind::Format(ImageFormatHint::Name("DDS".to_string())), ), )) } } } impl ImageDecoder for DdsDecoder { fn dimensions(&self) -> (u32, u32) { self.inner.dimensions() } fn color_type(&self) -> ColorType { self.inner.color_type() } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { self.inner.read_image(buf) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } #[cfg(test)] mod test { use super::*; #[test] fn dimension_overflow() { // A DXT1 header set to 0xFFFF_FFFC width and height (the highest u32%4 == 0) let header = [ 0x44, 0x44, 0x53, 0x20, 0x7C, 0x0, 0x0, 0x0, 0x7, 0x10, 0x8, 0x0, 0xFC, 0xFF, 0xFF, 0xFF, 0xFC, 0xFF, 0xFF, 0xFF, 0x0, 0xC0, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x49, 0x4D, 0x41, 0x47, 0x45, 0x4D, 0x41, 0x47, 0x49, 0x43, 0x4B, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x20, 0x0, 0x0, 0x0, 0x4, 0x0, 0x0, 0x0, 0x44, 0x58, 0x54, 0x31, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ]; assert!(DdsDecoder::new(&header[..]).is_err()); } } 
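// Editor's addition (not from the upstream crate): a small, self-contained sanity test
// that only exercises the `b"DDS "` signature check performed at the top of
// `DdsDecoder::new`, mirroring how the existing test above constructs a decoder from a
// byte slice.
#[cfg(test)]
mod signature_test {
    use super::*;

    #[test]
    fn rejects_wrong_signature() {
        // Four bytes that are not the required "DDS " magic value.
        assert!(DdsDecoder::new(&b"NOPE"[..]).is_err());
    }
}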
image-0.25.5/src/codecs/dxt.rs000064400000000000000000000277201046102023000142030ustar 00000000000000//! Decoding of DXT (S3TC) compression //! //! DXT is an image format that supports lossy compression //! //! # Related Links //! * - Description of the DXT compression OpenGL extensions. //! //! Note: this module only implements bare DXT encoding/decoding, it does not parse formats that can contain DXT files like .dds use std::io::{self, Read}; use crate::color::ColorType; use crate::error::{ImageError, ImageResult, ParameterError, ParameterErrorKind}; use crate::image::ImageDecoder; /// What version of DXT compression are we using? /// Note that DXT2 and DXT4 are left away as they're /// just DXT3 and DXT5 with premultiplied alpha #[derive(Clone, Copy, Debug, PartialEq, Eq)] pub(crate) enum DxtVariant { /// The DXT1 format. 48 bytes of RGB data in a 4x4 pixel square is /// compressed into an 8 byte block of DXT1 data DXT1, /// The DXT3 format. 64 bytes of RGBA data in a 4x4 pixel square is /// compressed into a 16 byte block of DXT3 data DXT3, /// The DXT5 format. 64 bytes of RGBA data in a 4x4 pixel square is /// compressed into a 16 byte block of DXT5 data DXT5, } impl DxtVariant { /// Returns the amount of bytes of raw image data /// that is encoded in a single DXTn block fn decoded_bytes_per_block(self) -> usize { match self { DxtVariant::DXT1 => 48, DxtVariant::DXT3 | DxtVariant::DXT5 => 64, } } /// Returns the amount of bytes per block of encoded DXTn data fn encoded_bytes_per_block(self) -> usize { match self { DxtVariant::DXT1 => 8, DxtVariant::DXT3 | DxtVariant::DXT5 => 16, } } /// Returns the color type that is stored in this DXT variant pub(crate) fn color_type(self) -> ColorType { match self { DxtVariant::DXT1 => ColorType::Rgb8, DxtVariant::DXT3 | DxtVariant::DXT5 => ColorType::Rgba8, } } } /// DXT decoder pub(crate) struct DxtDecoder { inner: R, width_blocks: u32, height_blocks: u32, variant: DxtVariant, row: u32, } impl DxtDecoder { /// Create a new DXT decoder that decodes from the stream ```r```. /// As DXT is often stored as raw buffers with the width/height /// somewhere else the width and height of the image need /// to be passed in ```width``` and ```height```, as well as the /// DXT variant in ```variant```. /// width and height are required to be powers of 2 and at least 4. /// otherwise an error will be returned pub(crate) fn new( r: R, width: u32, height: u32, variant: DxtVariant, ) -> Result, ImageError> { if width % 4 != 0 || height % 4 != 0 { // TODO: this is actually a bit of a weird case. We could return `DecodingError` but // it's not really the format that is wrong However, the encoder should surely return // `EncodingError` so it would be the logical choice for symmetry. 
return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } let width_blocks = width / 4; let height_blocks = height / 4; Ok(DxtDecoder { inner: r, width_blocks, height_blocks, variant, row: 0, }) } fn scanline_bytes(&self) -> u64 { self.variant.decoded_bytes_per_block() as u64 * u64::from(self.width_blocks) } fn read_scanline(&mut self, buf: &mut [u8]) -> io::Result { assert_eq!( u64::try_from(buf.len()), Ok( #[allow(deprecated)] self.scanline_bytes() ) ); let mut src = vec![0u8; self.variant.encoded_bytes_per_block() * self.width_blocks as usize]; self.inner.read_exact(&mut src)?; match self.variant { DxtVariant::DXT1 => decode_dxt1_row(&src, buf), DxtVariant::DXT3 => decode_dxt3_row(&src, buf), DxtVariant::DXT5 => decode_dxt5_row(&src, buf), } self.row += 1; Ok(buf.len()) } } // Note that, due to the way that DXT compression works, a scanline is considered to consist out of // 4 lines of pixels. impl ImageDecoder for DxtDecoder { fn dimensions(&self) -> (u32, u32) { (self.width_blocks * 4, self.height_blocks * 4) } fn color_type(&self) -> ColorType { self.variant.color_type() } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); #[allow(deprecated)] for chunk in buf.chunks_mut(self.scanline_bytes().max(1) as usize) { self.read_scanline(chunk)?; } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } /** * Actual encoding/decoding logic below. */ type Rgb = [u8; 3]; /// decodes a 5-bit R, 6-bit G, 5-bit B 16-bit packed color value into 8-bit RGB /// mapping is done so min/max range values are preserved. So for 5-bit /// values 0x00 -> 0x00 and 0x1F -> 0xFF fn enc565_decode(value: u16) -> Rgb { let red = (value >> 11) & 0x1F; let green = (value >> 5) & 0x3F; let blue = (value) & 0x1F; [ (red * 0xFF / 0x1F) as u8, (green * 0xFF / 0x3F) as u8, (blue * 0xFF / 0x1F) as u8, ] } /* * Functions for decoding DXT compression */ /// Constructs the DXT5 alpha lookup table from the two alpha entries /// if alpha0 > alpha1, constructs a table of [a0, a1, 6 linearly interpolated values from a0 to a1] /// if alpha0 <= alpha1, constructs a table of [a0, a1, 4 linearly interpolated values from a0 to a1, 0, 0xFF] fn alpha_table_dxt5(alpha0: u8, alpha1: u8) -> [u8; 8] { let mut table = [alpha0, alpha1, 0, 0, 0, 0, 0, 0xFF]; if alpha0 > alpha1 { for i in 2..8u16 { table[i as usize] = (((8 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 7) as u8; } } else { for i in 2..6u16 { table[i as usize] = (((6 - i) * u16::from(alpha0) + (i - 1) * u16::from(alpha1)) / 5) as u8; } } table } /// decodes an 8-byte dxt color block into the RGB channels of a 16xRGB or 16xRGBA block. 
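/// Editor's note: the two packed 5-6-5 endpoint colors come first; when `color0 > color1`
/// (or the block is not DXT1) the remaining two palette entries are the 1/3 and 2/3 blends
/// of the endpoints, otherwise only their average is added and the fourth entry stays black.
/// Each pixel then selects one of the four colors with a 2-bit index from the 32-bit table.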
/// source should have a length of 8, dest a length of 48 (RGB) or 64 (RGBA) fn decode_dxt_colors(source: &[u8], dest: &mut [u8], is_dxt1: bool) { // sanity checks, also enable the compiler to elide all following bound checks assert!(source.len() == 8 && (dest.len() == 48 || dest.len() == 64)); // calculate pitch to store RGB values in dest (3 for RGB, 4 for RGBA) let pitch = dest.len() / 16; // extract color data let color0 = u16::from(source[0]) | (u16::from(source[1]) << 8); let color1 = u16::from(source[2]) | (u16::from(source[3]) << 8); let color_table = u32::from(source[4]) | (u32::from(source[5]) << 8) | (u32::from(source[6]) << 16) | (u32::from(source[7]) << 24); // let color_table = source[4..8].iter().rev().fold(0, |t, &b| (t << 8) | b as u32); // decode the colors to rgb format let mut colors = [[0; 3]; 4]; colors[0] = enc565_decode(color0); colors[1] = enc565_decode(color1); // determine color interpolation method if color0 > color1 || !is_dxt1 { // linearly interpolate the other two color table entries for i in 0..3 { colors[2][i] = ((u16::from(colors[0][i]) * 2 + u16::from(colors[1][i]) + 1) / 3) as u8; colors[3][i] = ((u16::from(colors[0][i]) + u16::from(colors[1][i]) * 2 + 1) / 3) as u8; } } else { // linearly interpolate one other entry, keep the other at 0 for i in 0..3 { colors[2][i] = ((u16::from(colors[0][i]) + u16::from(colors[1][i]) + 1) / 2) as u8; } } // serialize the result. Every color is determined by looking up // two bits in color_table which identify which color to actually pick from the 4 possible colors for i in 0..16 { dest[i * pitch..i * pitch + 3] .copy_from_slice(&colors[(color_table >> (i * 2)) as usize & 3]); } } /// Decodes a 16-byte bock of dxt5 data to a 16xRGBA block fn decode_dxt5_block(source: &[u8], dest: &mut [u8]) { assert!(source.len() == 16 && dest.len() == 64); // extract alpha index table (stored as little endian 64-bit value) let alpha_table = source[2..8] .iter() .rev() .fold(0, |t, &b| (t << 8) | u64::from(b)); // alhpa level decode let alphas = alpha_table_dxt5(source[0], source[1]); // serialize alpha for i in 0..16 { dest[i * 4 + 3] = alphas[(alpha_table >> (i * 3)) as usize & 7]; } // handle colors decode_dxt_colors(&source[8..16], dest, false); } /// Decodes a 16-byte bock of dxt3 data to a 16xRGBA block fn decode_dxt3_block(source: &[u8], dest: &mut [u8]) { assert!(source.len() == 16 && dest.len() == 64); // extract alpha index table (stored as little endian 64-bit value) let alpha_table = source[0..8] .iter() .rev() .fold(0, |t, &b| (t << 8) | u64::from(b)); // serialize alpha (stored as 4-bit values) for i in 0..16 { dest[i * 4 + 3] = ((alpha_table >> (i * 4)) as u8 & 0xF) * 0x11; } // handle colors decode_dxt_colors(&source[8..16], dest, false); } /// Decodes a 8-byte bock of dxt5 data to a 16xRGB block fn decode_dxt1_block(source: &[u8], dest: &mut [u8]) { assert!(source.len() == 8 && dest.len() == 48); decode_dxt_colors(source, dest, true); } /// Decode a row of DXT1 data to four rows of RGB data. /// `source.len()` should be a multiple of 8, otherwise this panics. 
fn decode_dxt1_row(source: &[u8], dest: &mut [u8]) { assert!(source.len() % 8 == 0); let block_count = source.len() / 8; assert!(dest.len() >= block_count * 48); // contains the 16 decoded pixels per block let mut decoded_block = [0u8; 48]; for (x, encoded_block) in source.chunks(8).enumerate() { decode_dxt1_block(encoded_block, &mut decoded_block); // copy the values from the decoded block to linewise RGB layout for line in 0..4 { let offset = (block_count * line + x) * 12; dest[offset..offset + 12].copy_from_slice(&decoded_block[line * 12..(line + 1) * 12]); } } } /// Decode a row of DXT3 data to four rows of RGBA data. /// `source.len()` should be a multiple of 16, otherwise this panics. fn decode_dxt3_row(source: &[u8], dest: &mut [u8]) { assert!(source.len() % 16 == 0); let block_count = source.len() / 16; assert!(dest.len() >= block_count * 64); // contains the 16 decoded pixels per block let mut decoded_block = [0u8; 64]; for (x, encoded_block) in source.chunks(16).enumerate() { decode_dxt3_block(encoded_block, &mut decoded_block); // copy the values from the decoded block to linewise RGB layout for line in 0..4 { let offset = (block_count * line + x) * 16; dest[offset..offset + 16].copy_from_slice(&decoded_block[line * 16..(line + 1) * 16]); } } } /// Decode a row of DXT5 data to four rows of RGBA data. /// `source.len()` should be a multiple of 16, otherwise this panics. fn decode_dxt5_row(source: &[u8], dest: &mut [u8]) { assert!(source.len() % 16 == 0); let block_count = source.len() / 16; assert!(dest.len() >= block_count * 64); // contains the 16 decoded pixels per block let mut decoded_block = [0u8; 64]; for (x, encoded_block) in source.chunks(16).enumerate() { decode_dxt5_block(encoded_block, &mut decoded_block); // copy the values from the decoded block to linewise RGB layout for line in 0..4 { let offset = (block_count * line + x) * 16; dest[offset..offset + 16].copy_from_slice(&decoded_block[line * 16..(line + 1) * 16]); } } } image-0.25.5/src/codecs/farbfeld.rs000064400000000000000000000317721046102023000151530ustar 00000000000000//! Decoding of farbfeld images //! //! farbfeld is a lossless image format which is easy to parse, pipe and compress. //! //! It has the following format: //! //! | Bytes | Description | //! |--------|---------------------------------------------------------| //! | 8 | "farbfeld" magic value | //! | 4 | 32-Bit BE unsigned integer (width) | //! | 4 | 32-Bit BE unsigned integer (height) | //! | [2222] | 4⋅16-Bit BE unsigned integers [RGBA] / pixel, row-major | //! //! The RGB-data should be sRGB for best interoperability and not alpha-premultiplied. //! //! # Related Links //! 
* - the farbfeld specification use std::io::{self, Read, Seek, SeekFrom, Write}; use crate::color::ExtendedColorType; use crate::error::{ DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{self, ImageDecoder, ImageDecoderRect, ImageEncoder, ImageFormat}; use crate::ColorType; /// farbfeld Reader pub struct FarbfeldReader { width: u32, height: u32, inner: R, /// Relative to the start of the pixel data current_offset: u64, cached_byte: Option, } impl FarbfeldReader { fn new(mut buffered_read: R) -> ImageResult> { fn read_dimm(from: &mut R) -> ImageResult { let mut buf = [0u8; 4]; from.read_exact(&mut buf).map_err(|err| { ImageError::Decoding(DecodingError::new(ImageFormat::Farbfeld.into(), err)) })?; Ok(u32::from_be_bytes(buf)) } let mut magic = [0u8; 8]; buffered_read.read_exact(&mut magic).map_err(|err| { ImageError::Decoding(DecodingError::new(ImageFormat::Farbfeld.into(), err)) })?; if &magic != b"farbfeld" { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Farbfeld.into(), format!("Invalid magic: {magic:02x?}"), ))); } let reader = FarbfeldReader { width: read_dimm(&mut buffered_read)?, height: read_dimm(&mut buffered_read)?, inner: buffered_read, current_offset: 0, cached_byte: None, }; if crate::utils::check_dimension_overflow( reader.width, reader.height, // ExtendedColorType is always rgba16 8, ) { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Farbfeld.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({}x{}) are too large", reader.width, reader.height )), ), )); } Ok(reader) } } impl Read for FarbfeldReader { fn read(&mut self, mut buf: &mut [u8]) -> io::Result { let mut bytes_written = 0; if let Some(byte) = self.cached_byte.take() { buf[0] = byte; buf = &mut buf[1..]; bytes_written = 1; self.current_offset += 1; } if buf.len() == 1 { buf[0] = cache_byte(&mut self.inner, &mut self.cached_byte)?; bytes_written += 1; self.current_offset += 1; } else { for channel_out in buf.chunks_exact_mut(2) { consume_channel(&mut self.inner, channel_out)?; bytes_written += 2; self.current_offset += 2; } } Ok(bytes_written) } } impl Seek for FarbfeldReader { fn seek(&mut self, pos: SeekFrom) -> io::Result { fn parse_offset(original_offset: u64, end_offset: u64, pos: SeekFrom) -> Option { match pos { SeekFrom::Start(off) => i64::try_from(off) .ok()? .checked_sub(i64::try_from(original_offset).ok()?), SeekFrom::End(off) => { if off < i64::try_from(end_offset).unwrap_or(i64::MAX) { None } else { Some(i64::try_from(end_offset.checked_sub(original_offset)?).ok()? 
+ off) } } SeekFrom::Current(off) => { if off < i64::try_from(original_offset).unwrap_or(i64::MAX) { None } else { Some(off) } } } } let original_offset = self.current_offset; let end_offset = u64::from(self.width) * u64::from(self.height) * 2; let offset_from_current = parse_offset(original_offset, end_offset, pos).ok_or_else(|| { io::Error::new( io::ErrorKind::InvalidInput, "invalid seek to a negative or overflowing position", ) })?; // TODO: convert to seek_relative() once that gets stabilised self.inner.seek(SeekFrom::Current(offset_from_current))?; self.current_offset = if offset_from_current < 0 { original_offset.checked_sub(offset_from_current.wrapping_neg() as u64) } else { original_offset.checked_add(offset_from_current as u64) } .expect("This should've been checked above"); if self.current_offset < end_offset && self.current_offset % 2 == 1 { let curr = self.inner.seek(SeekFrom::Current(-1))?; cache_byte(&mut self.inner, &mut self.cached_byte)?; self.inner.seek(SeekFrom::Start(curr))?; } else { self.cached_byte = None; } Ok(original_offset) } } fn consume_channel(from: &mut R, mut to: &mut [u8]) -> io::Result<()> { let mut ibuf = [0u8; 2]; from.read_exact(&mut ibuf)?; to.write_all(&u16::from_be_bytes(ibuf).to_ne_bytes())?; Ok(()) } fn cache_byte(from: &mut R, cached_byte: &mut Option) -> io::Result { let mut obuf = [0u8; 2]; consume_channel(from, &mut obuf)?; *cached_byte = Some(obuf[1]); Ok(obuf[0]) } /// farbfeld decoder pub struct FarbfeldDecoder { reader: FarbfeldReader, } impl FarbfeldDecoder { /// Creates a new decoder that decodes from the stream ```r``` pub fn new(buffered_read: R) -> ImageResult> { Ok(FarbfeldDecoder { reader: FarbfeldReader::new(buffered_read)?, }) } } impl ImageDecoder for FarbfeldDecoder { fn dimensions(&self) -> (u32, u32) { (self.reader.width, self.reader.height) } fn color_type(&self) -> ColorType { ColorType::Rgba16 } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); self.reader.read_exact(buf)?; Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } impl ImageDecoderRect for FarbfeldDecoder { fn read_rect( &mut self, x: u32, y: u32, width: u32, height: u32, buf: &mut [u8], row_pitch: usize, ) -> ImageResult<()> { // A "scanline" (defined as "shortest non-caching read" in the doc) is just one channel in this case let start = self.reader.stream_position()?; image::load_rect( x, y, width, height, buf, row_pitch, self, 2, |s, scanline| s.reader.seek(SeekFrom::Start(scanline * 2)).map(|_| ()), |s, buf| s.reader.read_exact(buf), )?; self.reader.seek(SeekFrom::Start(start))?; Ok(()) } } /// farbfeld encoder pub struct FarbfeldEncoder { w: W, } impl FarbfeldEncoder { /// Create a new encoder that writes its output to ```w```. The writer should be buffered. pub fn new(buffered_writer: W) -> FarbfeldEncoder { FarbfeldEncoder { w: buffered_writer } } /// Encodes the image `data` (native endian) that has dimensions `width` and `height`. /// /// # Panics /// /// Panics if `width * height * 8 != data.len()`. 
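///
/// # Example
///
/// A minimal usage sketch, assuming this crate's `ff` feature is enabled so
/// the encoder is available under `image::codecs::farbfeld`:
///
/// ```no_run
/// use image::codecs::farbfeld::FarbfeldEncoder;
///
/// // One RGBA pixel: four 16-bit channels in native endian, 8 bytes total.
/// let mut data = Vec::new();
/// for channel in [0xFFFFu16, 0x8000, 0x0000, 0xFFFF] {
///     data.extend_from_slice(&channel.to_ne_bytes());
/// }
///
/// let mut out = Vec::new();
/// FarbfeldEncoder::new(&mut out).encode(&data, 1, 1).unwrap();
/// assert!(out.starts_with(b"farbfeld"));
/// ```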
#[track_caller] pub fn encode(self, data: &[u8], width: u32, height: u32) -> ImageResult<()> { let expected_buffer_len = (u64::from(width) * u64::from(height)).saturating_mul(8); assert_eq!( expected_buffer_len, data.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", data.len(), ); self.encode_impl(data, width, height)?; Ok(()) } fn encode_impl(mut self, data: &[u8], width: u32, height: u32) -> io::Result<()> { self.w.write_all(b"farbfeld")?; self.w.write_all(&width.to_be_bytes())?; self.w.write_all(&height.to_be_bytes())?; for channel in data.chunks_exact(2) { self.w .write_all(&u16::from_ne_bytes(channel.try_into().unwrap()).to_be_bytes())?; } Ok(()) } } impl ImageEncoder for FarbfeldEncoder { #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { if color_type != ExtendedColorType::Rgba16 { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Farbfeld.into(), UnsupportedErrorKind::Color(color_type), ), )); } self.encode(buf, width, height) } } #[cfg(test)] mod tests { use crate::codecs::farbfeld::FarbfeldDecoder; use crate::ImageDecoderRect; use byteorder_lite::{ByteOrder, NativeEndian}; use std::io::{Cursor, Seek, SeekFrom}; static RECTANGLE_IN: &[u8] = b"farbfeld\ \x00\x00\x00\x02\x00\x00\x00\x03\ \xFF\x01\xFE\x02\xFD\x03\xFC\x04\xFB\x05\xFA\x06\xF9\x07\xF8\x08\ \xF7\x09\xF6\x0A\xF5\x0B\xF4\x0C\xF3\x0D\xF2\x0E\xF1\x0F\xF0\x10\ \xEF\x11\xEE\x12\xED\x13\xEC\x14\xEB\x15\xEA\x16\xE9\x17\xE8\x18"; #[test] fn read_rect_1x2() { static RECTANGLE_OUT: &[u16] = &[ 0xF30D, 0xF20E, 0xF10F, 0xF010, 0xEB15, 0xEA16, 0xE917, 0xE818, ]; read_rect(1, 1, 1, 2, RECTANGLE_OUT); } #[test] fn read_rect_2x2() { static RECTANGLE_OUT: &[u16] = &[ 0xFF01, 0xFE02, 0xFD03, 0xFC04, 0xFB05, 0xFA06, 0xF907, 0xF808, 0xF709, 0xF60A, 0xF50B, 0xF40C, 0xF30D, 0xF20E, 0xF10F, 0xF010, ]; read_rect(0, 0, 2, 2, RECTANGLE_OUT); } #[test] fn read_rect_2x1() { static RECTANGLE_OUT: &[u16] = &[ 0xEF11, 0xEE12, 0xED13, 0xEC14, 0xEB15, 0xEA16, 0xE917, 0xE818, ]; read_rect(0, 2, 2, 1, RECTANGLE_OUT); } #[test] fn read_rect_2x3() { static RECTANGLE_OUT: &[u16] = &[ 0xFF01, 0xFE02, 0xFD03, 0xFC04, 0xFB05, 0xFA06, 0xF907, 0xF808, 0xF709, 0xF60A, 0xF50B, 0xF40C, 0xF30D, 0xF20E, 0xF10F, 0xF010, 0xEF11, 0xEE12, 0xED13, 0xEC14, 0xEB15, 0xEA16, 0xE917, 0xE818, ]; read_rect(0, 0, 2, 3, RECTANGLE_OUT); } #[test] fn read_rect_in_stream() { static RECTANGLE_OUT: &[u16] = &[0xEF11, 0xEE12, 0xED13, 0xEC14]; let mut input = vec![]; input.extend_from_slice(b"This is a 31-byte-long prologue"); input.extend_from_slice(RECTANGLE_IN); let mut input_cur = Cursor::new(input); input_cur.seek(SeekFrom::Start(31)).unwrap(); let mut out_buf = [0u8; 64]; FarbfeldDecoder::new(input_cur) .unwrap() .read_rect(0, 2, 1, 1, &mut out_buf, 8) .unwrap(); let exp = degenerate_pixels(RECTANGLE_OUT); assert_eq!(&out_buf[..exp.len()], &exp[..]); } #[test] fn dimension_overflow() { let header = b"farbfeld\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"; assert!(FarbfeldDecoder::new(Cursor::new(header)).is_err()); } fn read_rect(x: u32, y: u32, width: u32, height: u32, exp_wide: &[u16]) { let mut out_buf = [0u8; 64]; FarbfeldDecoder::new(Cursor::new(RECTANGLE_IN)) .unwrap() .read_rect(x, y, width, height, &mut out_buf, width as usize * 8) .unwrap(); let exp = degenerate_pixels(exp_wide); assert_eq!(&out_buf[..exp.len()], &exp[..]); } fn degenerate_pixels(exp_wide: &[u16]) -> Vec { let mut exp = vec![0u8; exp_wide.len() * 2]; 
NativeEndian::write_u16_into(exp_wide, &mut exp); exp } } image-0.25.5/src/codecs/gif.rs000064400000000000000000000552011046102023000141440ustar 00000000000000//! Decoding of GIF Images //! //! GIF (Graphics Interchange Format) is an image format that supports lossless compression. //! //! # Related Links //! * - The GIF Specification //! //! # Examples //! ```rust,no_run //! use image::codecs::gif::{GifDecoder, GifEncoder}; //! use image::{ImageDecoder, AnimationDecoder}; //! use std::fs::File; //! use std::io::BufReader; //! # fn main() -> std::io::Result<()> { //! // Decode a gif into frames //! let file_in = BufReader::new(File::open("foo.gif")?); //! let mut decoder = GifDecoder::new(file_in).unwrap(); //! let frames = decoder.into_frames(); //! let frames = frames.collect_frames().expect("error decoding gif"); //! //! // Encode frames into a gif and save to a file //! let mut file_out = File::open("out.gif")?; //! let mut encoder = GifEncoder::new(file_out); //! encoder.encode_frames(frames.into_iter()); //! # Ok(()) //! # } //! ``` #![allow(clippy::while_let_loop)] use std::io::{self, BufRead, Cursor, Read, Seek, Write}; use std::marker::PhantomData; use std::mem; use gif::ColorOutput; use gif::{DisposalMethod, Frame}; use crate::animation::{self, Ratio}; use crate::color::{ColorType, Rgba}; use crate::error::LimitError; use crate::error::LimitErrorKind; use crate::error::{ DecodingError, EncodingError, ImageError, ImageResult, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{AnimationDecoder, ImageDecoder, ImageFormat}; use crate::traits::Pixel; use crate::ExtendedColorType; use crate::ImageBuffer; use crate::Limits; /// GIF decoder pub struct GifDecoder { reader: gif::Decoder, limits: Limits, } impl GifDecoder { /// Creates a new decoder that decodes the input steam `r` pub fn new(r: R) -> ImageResult> { let mut decoder = gif::DecodeOptions::new(); decoder.set_color_output(ColorOutput::RGBA); Ok(GifDecoder { reader: decoder.read_info(r).map_err(ImageError::from_decoding)?, limits: Limits::no_limits(), }) } } /// Wrapper struct around a `Cursor>` #[allow(dead_code)] #[deprecated] pub struct GifReader(Cursor>, PhantomData); #[allow(deprecated)] impl Read for GifReader { fn read(&mut self, buf: &mut [u8]) -> io::Result { self.0.read(buf) } fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { if self.0.position() == 0 && buf.is_empty() { mem::swap(buf, self.0.get_mut()); Ok(buf.len()) } else { self.0.read_to_end(buf) } } } impl ImageDecoder for GifDecoder { fn dimensions(&self) -> (u32, u32) { ( u32::from(self.reader.width()), u32::from(self.reader.height()), ) } fn color_type(&self) -> ColorType { ColorType::Rgba8 } fn set_limits(&mut self, limits: Limits) -> ImageResult<()> { limits.check_support(&crate::LimitSupport::default())?; let (width, height) = self.dimensions(); limits.check_dimensions(width, height)?; self.limits = limits; Ok(()) } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); let frame = match self .reader .next_frame_info() .map_err(ImageError::from_decoding)? 
{ Some(frame) => FrameInfo::new_from_frame(frame), None => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::NoMoreData, ))) } }; let (width, height) = self.dimensions(); if frame.left == 0 && frame.width == width && (u64::from(frame.top) + u64::from(frame.height) <= u64::from(height)) { // If the frame matches the logical screen, or, as a more general case, // fits into it and touches its left and right borders, then // we can directly write it into the buffer without causing line wraparound. let line_length = usize::try_from(width) .unwrap() .checked_mul(self.color_type().bytes_per_pixel() as usize) .unwrap(); // isolate the portion of the buffer to read the frame data into. // the chunks above and below it are going to be zeroed. let (blank_top, rest) = buf.split_at_mut(line_length.checked_mul(frame.top as usize).unwrap()); let (buf, blank_bottom) = rest.split_at_mut(line_length.checked_mul(frame.height as usize).unwrap()); debug_assert_eq!(buf.len(), self.reader.buffer_size()); // this is only necessary in case the buffer is not zeroed for b in blank_top { *b = 0; } // fill the middle section with the frame data self.reader .read_into_buffer(buf) .map_err(ImageError::from_decoding)?; // this is only necessary in case the buffer is not zeroed for b in blank_bottom { *b = 0; } } else { // If the frame does not match the logical screen, read into an extra buffer // and 'insert' the frame from left/top to logical screen width/height. let buffer_size = (frame.width as usize) .checked_mul(frame.height as usize) .and_then(|s| s.checked_mul(4)) .ok_or(ImageError::Limits(LimitError::from_kind( LimitErrorKind::InsufficientMemory, )))?; self.limits.reserve_usize(buffer_size)?; let mut frame_buffer = vec![0; buffer_size]; self.limits.free_usize(buffer_size); self.reader .read_into_buffer(&mut frame_buffer[..]) .map_err(ImageError::from_decoding)?; let frame_buffer = ImageBuffer::from_raw(frame.width, frame.height, frame_buffer); let image_buffer = ImageBuffer::from_raw(width, height, buf); // `buffer_size` uses wrapping arithmetic, thus might not report the // correct storage requirement if the result does not fit in `usize`. // `ImageBuffer::from_raw` detects overflow and reports by returning `None`. 
if frame_buffer.is_none() || image_buffer.is_none() { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Gif.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({}, {}) are too large", frame.width, frame.height )), ), )); } let frame_buffer = frame_buffer.unwrap(); let mut image_buffer = image_buffer.unwrap(); for (x, y, pixel) in image_buffer.enumerate_pixels_mut() { let frame_x = x.wrapping_sub(frame.left); let frame_y = y.wrapping_sub(frame.top); if frame_x < frame.width && frame_y < frame.height { *pixel = *frame_buffer.get_pixel(frame_x, frame_y); } else { // this is only necessary in case the buffer is not zeroed *pixel = Rgba([0, 0, 0, 0]); } } } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } struct GifFrameIterator { reader: gif::Decoder, width: u32, height: u32, non_disposed_frame: Option, Vec>>, limits: Limits, } impl GifFrameIterator { fn new(decoder: GifDecoder) -> GifFrameIterator { let (width, height) = decoder.dimensions(); let limits = decoder.limits.clone(); // intentionally ignore the background color for web compatibility GifFrameIterator { reader: decoder.reader, width, height, non_disposed_frame: None, limits, } } } impl Iterator for GifFrameIterator { type Item = ImageResult; fn next(&mut self) -> Option> { // The iterator always produces RGBA8 images const COLOR_TYPE: ColorType = ColorType::Rgba8; // Allocate the buffer for the previous frame. // This is done here and not in the constructor because // the constructor cannot return an error when the allocation limit is exceeded. if self.non_disposed_frame.is_none() { if let Err(e) = self .limits .reserve_buffer(self.width, self.height, COLOR_TYPE) { return Some(Err(e)); } self.non_disposed_frame = Some(ImageBuffer::from_pixel( self.width, self.height, Rgba([0, 0, 0, 0]), )); } // Bind to a variable to avoid repeated `.unwrap()` calls let non_disposed_frame = self.non_disposed_frame.as_mut().unwrap(); // begin looping over each frame let frame = match self.reader.next_frame_info() { Ok(frame_info) => { if let Some(frame) = frame_info { FrameInfo::new_from_frame(frame) } else { // no more frames return None; } } Err(err) => return Some(Err(ImageError::from_decoding(err))), }; // All allocations we do from now on will be freed at the end of this function. // Therefore, do not count them towards the persistent limits. // Instead, create a local instance of `Limits` for this function alone // which will be dropped along with all the buffers when they go out of scope. let mut local_limits = self.limits.clone(); // Check the allocation we're about to perform against the limits if let Err(e) = local_limits.reserve_buffer(frame.width, frame.height, COLOR_TYPE) { return Some(Err(e)); } // Allocate the buffer now that the limits allowed it let mut vec = vec![0; self.reader.buffer_size()]; if let Err(err) = self.reader.read_into_buffer(&mut vec) { return Some(Err(ImageError::from_decoding(err))); } // create the image buffer from the raw frame. // `buffer_size` uses wrapping arithmetic, thus might not report the // correct storage requirement if the result does not fit in `usize`. // on the other hand, `ImageBuffer::from_raw` detects overflow and // reports by returning `None`. 
let Some(mut frame_buffer) = ImageBuffer::from_raw(frame.width, frame.height, vec) else { return Some(Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Gif.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({}, {}) are too large", frame.width, frame.height )), ), ))); }; // blend the current frame with the non-disposed frame, then update // the non-disposed frame according to the disposal method. fn blend_and_dispose_pixel( dispose: DisposalMethod, previous: &mut Rgba, current: &mut Rgba, ) { let pixel_alpha = current.channels()[3]; if pixel_alpha == 0 { *current = *previous; } match dispose { DisposalMethod::Any | DisposalMethod::Keep => { // do not dispose // (keep pixels from this frame) // note: the `Any` disposal method is underspecified in the GIF // spec, but most viewers treat it identically to `Keep` *previous = *current; } DisposalMethod::Background => { // restore to background color // (background shows through transparent pixels in the next frame) *previous = Rgba([0, 0, 0, 0]); } DisposalMethod::Previous => { // restore to previous // (dispose frames leaving the last none disposal frame) } } } // if `frame_buffer`'s frame exactly matches the entire image, then // use it directly, else create a new buffer to hold the composited // image. let image_buffer = if (frame.left, frame.top) == (0, 0) && (self.width, self.height) == frame_buffer.dimensions() { for (x, y, pixel) in frame_buffer.enumerate_pixels_mut() { let previous_pixel = non_disposed_frame.get_pixel_mut(x, y); blend_and_dispose_pixel(frame.disposal_method, previous_pixel, pixel); } frame_buffer } else { // Check limits before allocating the buffer if let Err(e) = local_limits.reserve_buffer(self.width, self.height, COLOR_TYPE) { return Some(Err(e)); } ImageBuffer::from_fn(self.width, self.height, |x, y| { let frame_x = x.wrapping_sub(frame.left); let frame_y = y.wrapping_sub(frame.top); let previous_pixel = non_disposed_frame.get_pixel_mut(x, y); if frame_x < frame_buffer.width() && frame_y < frame_buffer.height() { let mut pixel = *frame_buffer.get_pixel(frame_x, frame_y); blend_and_dispose_pixel(frame.disposal_method, previous_pixel, &mut pixel); pixel } else { // out of bounds, return pixel from previous frame *previous_pixel } }) }; Some(Ok(animation::Frame::from_parts( image_buffer, 0, 0, frame.delay, ))) } } impl<'a, R: BufRead + Seek + 'a> AnimationDecoder<'a> for GifDecoder { fn into_frames(self) -> animation::Frames<'a> { animation::Frames::new(Box::new(GifFrameIterator::new(self))) } } struct FrameInfo { left: u32, top: u32, width: u32, height: u32, disposal_method: DisposalMethod, delay: animation::Delay, } impl FrameInfo { fn new_from_frame(frame: &Frame) -> FrameInfo { FrameInfo { left: u32::from(frame.left), top: u32::from(frame.top), width: u32::from(frame.width), height: u32::from(frame.height), disposal_method: frame.dispose, // frame.delay is in units of 10ms so frame.delay*10 is in ms delay: animation::Delay::from_ratio(Ratio::new(u32::from(frame.delay) * 10, 1)), } } } /// Number of repetitions for a GIF animation #[derive(Clone, Copy, Debug)] pub enum Repeat { /// Finite number of repetitions Finite(u16), /// Looping GIF Infinite, } impl Repeat { pub(crate) fn to_gif_enum(self) -> gif::Repeat { match self { Repeat::Finite(n) => gif::Repeat::Finite(n), Repeat::Infinite => gif::Repeat::Infinite, } } } /// GIF encoder. 
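///
/// A minimal sketch of encoding a single RGBA frame, assuming this crate's
/// `gif` feature is enabled:
///
/// ```no_run
/// use image::codecs::gif::GifEncoder;
/// use image::ExtendedColorType;
///
/// let (width, height) = (2u32, 2u32);
/// let pixels = vec![0u8; (width * height * 4) as usize]; // RGBA8 data
/// let mut out = Vec::new();
/// {
///     let mut encoder = GifEncoder::new(&mut out);
///     encoder
///         .encode(&pixels, width, height, ExtendedColorType::Rgba8)
///         .unwrap();
/// }
/// assert!(out.starts_with(b"GIF89a"));
/// ```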
pub struct GifEncoder { w: Option, gif_encoder: Option>, speed: i32, repeat: Option, } impl GifEncoder { /// Creates a new GIF encoder with a speed of 1. This prioritizes quality over performance at any cost. pub fn new(w: W) -> GifEncoder { Self::new_with_speed(w, 1) } /// Create a new GIF encoder, and has the speed parameter `speed`. See /// [`Frame::from_rgba_speed`](https://docs.rs/gif/latest/gif/struct.Frame.html#method.from_rgba_speed) /// for more information. pub fn new_with_speed(w: W, speed: i32) -> GifEncoder { assert!( (1..=30).contains(&speed), "speed needs to be in the range [1, 30]" ); GifEncoder { w: Some(w), gif_encoder: None, speed, repeat: None, } } /// Set the repeat behaviour of the encoded GIF pub fn set_repeat(&mut self, repeat: Repeat) -> ImageResult<()> { if let Some(ref mut encoder) = self.gif_encoder { encoder .set_repeat(repeat.to_gif_enum()) .map_err(ImageError::from_encoding)?; } self.repeat = Some(repeat); Ok(()) } /// Encode a single image. pub fn encode( &mut self, data: &[u8], width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let (width, height) = self.gif_dimensions(width, height)?; match color { ExtendedColorType::Rgb8 => self.encode_gif(Frame::from_rgb(width, height, data)), ExtendedColorType::Rgba8 => { self.encode_gif(Frame::from_rgba(width, height, &mut data.to_owned())) } _ => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Gif.into(), UnsupportedErrorKind::Color(color), ), )), } } /// Encode one frame of animation. pub fn encode_frame(&mut self, img_frame: animation::Frame) -> ImageResult<()> { let frame = self.convert_frame(img_frame)?; self.encode_gif(frame) } /// Encodes Frames. /// Consider using `try_encode_frames` instead to encode an `animation::Frames` like iterator. pub fn encode_frames(&mut self, frames: F) -> ImageResult<()> where F: IntoIterator, { for img_frame in frames { self.encode_frame(img_frame)?; } Ok(()) } /// Try to encode a collection of `ImageResult` objects. /// Use this function to encode an `animation::Frames` like iterator. /// Whenever an `Err` item is encountered, that value is returned without further actions. pub fn try_encode_frames(&mut self, frames: F) -> ImageResult<()> where F: IntoIterator>, { for img_frame in frames { self.encode_frame(img_frame?)?; } Ok(()) } pub(crate) fn convert_frame( &mut self, img_frame: animation::Frame, ) -> ImageResult> { // get the delay before converting img_frame let frame_delay = img_frame.delay().into_ratio().to_integer(); // convert img_frame into RgbaImage let mut rbga_frame = img_frame.into_buffer(); let (width, height) = self.gif_dimensions(rbga_frame.width(), rbga_frame.height())?; // Create the gif::Frame from the animation::Frame let mut frame = Frame::from_rgba_speed(width, height, &mut rbga_frame, self.speed); // Saturate the conversion to u16::MAX instead of returning an error as that // would require a new special cased variant in ParameterErrorKind which most // likely couldn't be reused for other cases. This isn't a bad trade-off given // that the current algorithm is already lossy. frame.delay = (frame_delay / 10).try_into().unwrap_or(u16::MAX); Ok(frame) } fn gif_dimensions(&self, width: u32, height: u32) -> ImageResult<(u16, u16)> { fn inner_dimensions(width: u32, height: u32) -> Option<(u16, u16)> { let width = u16::try_from(width).ok()?; let height = u16::try_from(height).ok()?; Some((width, height)) } // TODO: this is not very idiomatic yet. Should return an EncodingError. 
inner_dimensions(width, height).ok_or_else(|| { ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )) }) } pub(crate) fn encode_gif(&mut self, mut frame: Frame) -> ImageResult<()> { let gif_encoder; if let Some(ref mut encoder) = self.gif_encoder { gif_encoder = encoder; } else { let writer = self.w.take().unwrap(); let mut encoder = gif::Encoder::new(writer, frame.width, frame.height, &[]) .map_err(ImageError::from_encoding)?; if let Some(ref repeat) = self.repeat { encoder .set_repeat(repeat.to_gif_enum()) .map_err(ImageError::from_encoding)?; } self.gif_encoder = Some(encoder); gif_encoder = self.gif_encoder.as_mut().unwrap(); } frame.dispose = DisposalMethod::Background; gif_encoder .write_frame(&frame) .map_err(ImageError::from_encoding) } } impl ImageError { fn from_decoding(err: gif::DecodingError) -> ImageError { use gif::DecodingError::*; match err { err @ Format(_) => { ImageError::Decoding(DecodingError::new(ImageFormat::Gif.into(), err)) } Io(io_err) => ImageError::IoError(io_err), } } fn from_encoding(err: gif::EncodingError) -> ImageError { use gif::EncodingError::*; match err { err @ Format(_) => { ImageError::Encoding(EncodingError::new(ImageFormat::Gif.into(), err)) } Io(io_err) => ImageError::IoError(io_err), } } } #[cfg(test)] mod test { use super::*; #[test] fn frames_exceeding_logical_screen_size() { // This is a gif with 10x10 logical screen, but a 16x16 frame + 6px offset inside. let data = vec![ 0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x0A, 0x00, 0x0A, 0x00, 0xF0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0E, 0xFF, 0x1F, 0x21, 0xF9, 0x04, 0x09, 0x64, 0x00, 0x00, 0x00, 0x2C, 0x06, 0x00, 0x06, 0x00, 0x10, 0x00, 0x10, 0x00, 0x00, 0x02, 0x23, 0x84, 0x8F, 0xA9, 0xBB, 0xE1, 0xE8, 0x42, 0x8A, 0x0F, 0x50, 0x79, 0xAE, 0xD1, 0xF9, 0x7A, 0xE8, 0x71, 0x5B, 0x48, 0x81, 0x64, 0xD5, 0x91, 0xCA, 0x89, 0x4D, 0x21, 0x63, 0x89, 0x4C, 0x09, 0x77, 0xF5, 0x6D, 0x14, 0x00, 0x3B, ]; let decoder = GifDecoder::new(Cursor::new(data)).unwrap(); let mut buf = vec![0u8; decoder.total_bytes() as usize]; assert!(decoder.read_image(&mut buf).is_ok()); } } image-0.25.5/src/codecs/hdr/decoder.rs000064400000000000000000000656571046102023000156010ustar 00000000000000use std::io::{self, Read}; use std::num::{ParseFloatError, ParseIntError}; use std::{error, fmt}; use crate::color::{ColorType, Rgb}; use crate::error::{ DecodingError, ImageError, ImageFormatHint, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageFormat}; /// Errors that can occur during decoding and parsing of a HDR image #[derive(Debug, Clone, PartialEq, Eq)] enum DecoderError { /// HDR's "#?RADIANCE" signature wrong or missing RadianceHdrSignatureInvalid, /// EOF before end of header TruncatedHeader, /// EOF instead of image dimensions TruncatedDimensions, /// A value couldn't be parsed UnparsableF32(LineType, ParseFloatError), /// A value couldn't be parsed UnparsableU32(LineType, ParseIntError), /// Not enough numbers in line LineTooShort(LineType), /// COLORCORR contains too many numbers in strict mode ExtraneousColorcorrNumbers, /// Dimensions line had too few elements DimensionsLineTooShort(usize, usize), /// Dimensions line had too many elements DimensionsLineTooLong(usize), /// The length of a scanline (1) wasn't a match for the specified length (2) WrongScanlineLength(usize, usize), /// First pixel of a scanline is a run length marker FirstPixelRlMarker, } impl fmt::Display for DecoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { 
DecoderError::RadianceHdrSignatureInvalid => { f.write_str("Radiance HDR signature not found") } DecoderError::TruncatedHeader => f.write_str("EOF in header"), DecoderError::TruncatedDimensions => f.write_str("EOF in dimensions line"), DecoderError::UnparsableF32(line, pe) => { f.write_fmt(format_args!("Cannot parse {line} value as f32: {pe}")) } DecoderError::UnparsableU32(line, pe) => { f.write_fmt(format_args!("Cannot parse {line} value as u32: {pe}")) } DecoderError::LineTooShort(line) => { f.write_fmt(format_args!("Not enough numbers in {line}")) } DecoderError::ExtraneousColorcorrNumbers => f.write_str("Extra numbers in COLORCORR"), DecoderError::DimensionsLineTooShort(elements, expected) => f.write_fmt(format_args!( "Dimensions line too short: have {elements} elements, expected {expected}" )), DecoderError::DimensionsLineTooLong(expected) => f.write_fmt(format_args!( "Dimensions line too long, expected {expected} elements" )), DecoderError::WrongScanlineLength(len, expected) => f.write_fmt(format_args!( "Wrong length of decoded scanline: got {len}, expected {expected}" )), DecoderError::FirstPixelRlMarker => { f.write_str("First pixel of a scanline shouldn't be run length marker") } } } } impl From for ImageError { fn from(e: DecoderError) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Hdr.into(), e)) } } impl error::Error for DecoderError { fn source(&self) -> Option<&(dyn error::Error + 'static)> { match self { DecoderError::UnparsableF32(_, err) => Some(err), DecoderError::UnparsableU32(_, err) => Some(err), _ => None, } } } /// Lines which contain parsable data that can fail #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum LineType { Exposure, Pixaspect, Colorcorr, DimensionsHeight, DimensionsWidth, } impl fmt::Display for LineType { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(match self { LineType::Exposure => "EXPOSURE", LineType::Pixaspect => "PIXASPECT", LineType::Colorcorr => "COLORCORR", LineType::DimensionsHeight => "height dimension", LineType::DimensionsWidth => "width dimension", }) } } /// Radiance HDR file signature pub const SIGNATURE: &[u8] = b"#?RADIANCE"; const SIGNATURE_LENGTH: usize = 10; /// An Radiance HDR decoder #[derive(Debug)] pub struct HdrDecoder { r: R, width: u32, height: u32, meta: HdrMetadata, } /// Refer to [wikipedia](https://en.wikipedia.org/wiki/RGBE_image_format) #[repr(C)] #[derive(Clone, Copy, Debug, Default, PartialEq, Eq)] pub(crate) struct Rgbe8Pixel { /// Color components pub(crate) c: [u8; 3], /// Exponent pub(crate) e: u8, } /// Creates `Rgbe8Pixel` from components pub(crate) fn rgbe8(r: u8, g: u8, b: u8, e: u8) -> Rgbe8Pixel { Rgbe8Pixel { c: [r, g, b], e } } impl Rgbe8Pixel { /// Converts `Rgbe8Pixel` into `Rgb` linearly #[inline] pub(crate) fn to_hdr(self) -> Rgb { if self.e == 0 { Rgb([0.0, 0.0, 0.0]) } else { // let exp = f32::ldexp(1., self.e as isize - (128 + 8)); // unstable let exp = f32::exp2(>::from(self.e) - (128.0 + 8.0)); Rgb([ exp * >::from(self.c[0]), exp * >::from(self.c[1]), exp * >::from(self.c[2]), ]) } } } impl HdrDecoder { /// Reads Radiance HDR image header from stream ```r``` /// if the header is valid, creates `HdrDecoder` /// strict mode is enabled pub fn new(reader: R) -> ImageResult { HdrDecoder::with_strictness(reader, true) } /// Allows reading old Radiance HDR images pub fn new_nonstrict(reader: R) -> ImageResult { Self::with_strictness(reader, false) } /// Reads Radiance HDR image header from stream `reader`, /// if the header is 
valid, creates `HdrDecoder`. /// /// strict enables strict mode /// /// Warning! Reading wrong file in non-strict mode /// could consume file size worth of memory in the process. pub fn with_strictness(mut reader: R, strict: bool) -> ImageResult> { let mut attributes = HdrMetadata::new(); { // scope to make borrowck happy let r = &mut reader; if strict { let mut signature = [0; SIGNATURE_LENGTH]; r.read_exact(&mut signature)?; if signature != SIGNATURE { return Err(DecoderError::RadianceHdrSignatureInvalid.into()); } // no else // skip signature line ending read_line_u8(r)?; } else { // Old Radiance HDR files (*.pic) don't use signature // Let them be parsed in non-strict mode } // read header data until empty line loop { match read_line_u8(r)? { None => { // EOF before end of header return Err(DecoderError::TruncatedHeader.into()); } Some(line) => { if line.is_empty() { // end of header break; } else if line[0] == b'#' { // line[0] will not panic, line.len() == 0 is false here // skip comments continue; } // no else // process attribute line let line = String::from_utf8_lossy(&line[..]); attributes.update_header_info(&line, strict)?; } // <= Some(line) } // match read_line_u8() } // loop } // scope to end borrow of reader // parse dimensions let (width, height) = match read_line_u8(&mut reader)? { None => { // EOF instead of image dimensions return Err(DecoderError::TruncatedDimensions.into()); } Some(dimensions) => { let dimensions = String::from_utf8_lossy(&dimensions[..]); parse_dimensions_line(&dimensions, strict)? } }; // color type is always rgb8 if crate::utils::check_dimension_overflow(width, height, ColorType::Rgb8.bytes_per_pixel()) { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Hdr.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({width}x{height}) are too large" )), ), )); } Ok(HdrDecoder { r: reader, width, height, meta: HdrMetadata { width, height, ..attributes }, }) } // end with_strictness /// Returns file metadata. Refer to `HdrMetadata` for details. pub fn metadata(&self) -> HdrMetadata { self.meta.clone() } /// Consumes decoder and returns a vector of transformed pixels fn read_image_transform T>( mut self, f: F, output_slice: &mut [T], ) -> ImageResult<()> { assert_eq!( output_slice.len(), self.width as usize * self.height as usize ); // Don't read anything if image is empty if self.width == 0 || self.height == 0 { return Ok(()); } let chunks_iter = output_slice.chunks_mut(self.width as usize); let mut buf = vec![Default::default(); self.width as usize]; for chunk in chunks_iter { // read_scanline overwrites the entire buffer or returns an Err, // so not resetting the buffer here is ok. 
read_scanline(&mut self.r, &mut buf[..])?; for (dst, &pix) in chunk.iter_mut().zip(buf.iter()) { *dst = f(pix); } } Ok(()) } } impl ImageDecoder for HdrDecoder { fn dimensions(&self) -> (u32, u32) { (self.meta.width, self.meta.height) } fn color_type(&self) -> ColorType { ColorType::Rgb32F } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); let mut img = vec![Rgb([0.0, 0.0, 0.0]); self.width as usize * self.height as usize]; self.read_image_transform(|pix| pix.to_hdr(), &mut img[..])?; for (i, Rgb(data)) in img.into_iter().enumerate() { buf[(i * 12)..][..12].copy_from_slice(bytemuck::cast_slice(&data)); } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } // Precondition: buf.len() > 0 fn read_scanline(r: &mut R, buf: &mut [Rgbe8Pixel]) -> ImageResult<()> { assert!(!buf.is_empty()); let width = buf.len(); // first 4 bytes in scanline allow to determine compression method let fb = read_rgbe(r)?; if fb.c[0] == 2 && fb.c[1] == 2 && fb.c[2] < 128 { // denormalized pixel value (2,2,<128,_) indicates new per component RLE method // decode_component guarantees that offset is within 0 .. width // therefore we can skip bounds checking here, but we will not decode_component(r, width, |offset, value| buf[offset].c[0] = value)?; decode_component(r, width, |offset, value| buf[offset].c[1] = value)?; decode_component(r, width, |offset, value| buf[offset].c[2] = value)?; decode_component(r, width, |offset, value| buf[offset].e = value)?; } else { // old RLE method (it was considered old around 1991, should it be here?) decode_old_rle(r, fb, buf)?; } Ok(()) } #[inline(always)] fn read_byte(r: &mut R) -> io::Result { let mut buf = [0u8]; r.read_exact(&mut buf[..])?; Ok(buf[0]) } // Guarantees that first parameter of set_component will be within pos .. pos+width #[inline] fn decode_component( r: &mut R, width: usize, mut set_component: S, ) -> ImageResult<()> { let mut buf = [0; 128]; let mut pos = 0; while pos < width { // increment position by a number of decompressed values pos += { let rl = read_byte(r)?; if rl <= 128 { // sanity check if pos + rl as usize > width { return Err(DecoderError::WrongScanlineLength(pos + rl as usize, width).into()); } // read values r.read_exact(&mut buf[0..rl as usize])?; for (offset, &value) in buf[0..rl as usize].iter().enumerate() { set_component(pos + offset, value); } rl as usize } else { // run let rl = rl - 128; // sanity check if pos + rl as usize > width { return Err(DecoderError::WrongScanlineLength(pos + rl as usize, width).into()); } // fill with same value let value = read_byte(r)?; for offset in 0..rl as usize { set_component(pos + offset, value); } rl as usize } }; } if pos != width { return Err(DecoderError::WrongScanlineLength(pos, width).into()); } Ok(()) } // Decodes scanline, places it into buf // Precondition: buf.len() > 0 // fb - first 4 bytes of scanline fn decode_old_rle(r: &mut R, fb: Rgbe8Pixel, buf: &mut [Rgbe8Pixel]) -> ImageResult<()> { assert!(!buf.is_empty()); let width = buf.len(); // convenience function. 
// returns run length if pixel is a run length marker #[inline] fn rl_marker(pix: Rgbe8Pixel) -> Option { if pix.c == [1, 1, 1] { Some(pix.e as usize) } else { None } } // first pixel in scanline should not be run length marker // it is error if it is if rl_marker(fb).is_some() { return Err(DecoderError::FirstPixelRlMarker.into()); } buf[0] = fb; // set first pixel of scanline let mut x_off = 1; // current offset from beginning of a scanline let mut rl_mult = 1; // current run length multiplier let mut prev_pixel = fb; while x_off < width { let pix = read_rgbe(r)?; // it's harder to forget to increase x_off if I write this this way. x_off += { if let Some(rl) = rl_marker(pix) { // rl_mult takes care of consecutive RL markers let rl = rl * rl_mult; rl_mult *= 256; if x_off + rl <= width { // do run for b in &mut buf[x_off..x_off + rl] { *b = prev_pixel; } } else { return Err(DecoderError::WrongScanlineLength(x_off + rl, width).into()); }; rl // value to increase x_off by } else { rl_mult = 1; // chain of consecutive RL markers is broken prev_pixel = pix; buf[x_off] = pix; 1 // value to increase x_off by } }; } if x_off != width { return Err(DecoderError::WrongScanlineLength(x_off, width).into()); } Ok(()) } fn read_rgbe(r: &mut R) -> io::Result { let mut buf = [0u8; 4]; r.read_exact(&mut buf[..])?; Ok(Rgbe8Pixel { c: [buf[0], buf[1], buf[2]], e: buf[3], }) } /// Metadata for Radiance HDR image #[derive(Debug, Clone)] pub struct HdrMetadata { /// Width of decoded image. It could be either scanline length, /// or scanline count, depending on image orientation. pub width: u32, /// Height of decoded image. It depends on orientation too. pub height: u32, /// Orientation matrix. For standard orientation it is ((1,0),(0,1)) - left to right, top to bottom. /// First pair tells how resulting pixel coordinates change along a scanline. /// Second pair tells how they change from one scanline to the next. pub orientation: ((i8, i8), (i8, i8)), /// Divide color values by exposure to get to get physical radiance in /// watts/steradian/m2 /// /// Image may not contain physical data, even if this field is set. pub exposure: Option, /// Divide color values by corresponding tuple member (r, g, b) to get to get physical radiance /// in watts/steradian/m2 /// /// Image may not contain physical data, even if this field is set. pub color_correction: Option<(f32, f32, f32)>, /// Pixel height divided by pixel width pub pixel_aspect_ratio: Option, /// All lines contained in image header are put here. Ordering of lines is preserved. /// Lines in the form "key=value" are represented as ("key", "value"). 
/// All other lines are ("", "line") pub custom_attributes: Vec<(String, String)>, } impl HdrMetadata { fn new() -> HdrMetadata { HdrMetadata { width: 0, height: 0, orientation: ((1, 0), (0, 1)), exposure: None, color_correction: None, pixel_aspect_ratio: None, custom_attributes: vec![], } } // Updates header info, in strict mode returns error for malformed lines (no '=' separator) // unknown attributes are skipped fn update_header_info(&mut self, line: &str, strict: bool) -> ImageResult<()> { // split line at first '=' // old Radiance HDR files (*.pic) feature tabs in key, so vvv trim let maybe_key_value = split_at_first(line, "=").map(|(key, value)| (key.trim(), value)); // save all header lines in custom_attributes match maybe_key_value { Some((key, val)) => self .custom_attributes .push((key.to_owned(), val.to_owned())), None => self .custom_attributes .push((String::new(), line.to_owned())), } // parse known attributes match maybe_key_value { Some(("FORMAT", val)) => { if val.trim() != "32-bit_rle_rgbe" { // XYZE isn't supported yet return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Hdr.into(), UnsupportedErrorKind::Format(ImageFormatHint::Name(limit_string_len( val, 20, ))), ), )); } } Some(("EXPOSURE", val)) => { match val.trim().parse::() { Ok(v) => { self.exposure = Some(self.exposure.unwrap_or(1.0) * v); // all encountered exposure values should be multiplied } Err(parse_error) => { if strict { return Err(DecoderError::UnparsableF32( LineType::Exposure, parse_error, ) .into()); } // no else, skip this line in non-strict mode } }; } Some(("PIXASPECT", val)) => { match val.trim().parse::() { Ok(v) => { self.pixel_aspect_ratio = Some(self.pixel_aspect_ratio.unwrap_or(1.0) * v); // all encountered exposure values should be multiplied } Err(parse_error) => { if strict { return Err(DecoderError::UnparsableF32( LineType::Pixaspect, parse_error, ) .into()); } // no else, skip this line in non-strict mode } }; } Some(("COLORCORR", val)) => { let mut rgbcorr = [1.0, 1.0, 1.0]; match parse_space_separated_f32(val, &mut rgbcorr, LineType::Colorcorr) { Ok(extra_numbers) => { if strict && extra_numbers { return Err(DecoderError::ExtraneousColorcorrNumbers.into()); } // no else, just ignore extra numbers let (rc, gc, bc) = self.color_correction.unwrap_or((1.0, 1.0, 1.0)); self.color_correction = Some((rc * rgbcorr[0], gc * rgbcorr[1], bc * rgbcorr[2])); } Err(err) => { if strict { return Err(err); } // no else, skip malformed line in non-strict mode } } } None => { // old Radiance HDR files (*.pic) contain commands in a header // just skip them } _ => { // skip unknown attribute } } // match attributes Ok(()) } } fn parse_space_separated_f32(line: &str, vals: &mut [f32], line_tp: LineType) -> ImageResult { let mut nums = line.split_whitespace(); for val in vals.iter_mut() { if let Some(num) = nums.next() { match num.parse::() { Ok(v) => *val = v, Err(err) => return Err(DecoderError::UnparsableF32(line_tp, err).into()), } } else { // not enough numbers in line return Err(DecoderError::LineTooShort(line_tp).into()); } } Ok(nums.next().is_some()) } // Parses dimension line "-Y height +X width" // returns (width, height) or error fn parse_dimensions_line(line: &str, strict: bool) -> ImageResult<(u32, u32)> { const DIMENSIONS_COUNT: usize = 4; let mut dim_parts = line.split_whitespace(); let c1_tag = dim_parts .next() .ok_or(DecoderError::DimensionsLineTooShort(0, DIMENSIONS_COUNT))?; let c1_str = dim_parts .next() .ok_or(DecoderError::DimensionsLineTooShort(1, 
DIMENSIONS_COUNT))?; let c2_tag = dim_parts .next() .ok_or(DecoderError::DimensionsLineTooShort(2, DIMENSIONS_COUNT))?; let c2_str = dim_parts .next() .ok_or(DecoderError::DimensionsLineTooShort(3, DIMENSIONS_COUNT))?; if strict && dim_parts.next().is_some() { // extra data in dimensions line return Err(DecoderError::DimensionsLineTooLong(DIMENSIONS_COUNT).into()); } // no else // dimensions line is in the form "-Y 10 +X 20" // There are 8 possible orientations: +Y +X, +X -Y and so on match (c1_tag, c2_tag) { ("-Y", "+X") => { // Common orientation (left-right, top-down) // c1_str is height, c2_str is width let height = c1_str .parse::() .map_err(|pe| DecoderError::UnparsableU32(LineType::DimensionsHeight, pe))?; let width = c2_str .parse::() .map_err(|pe| DecoderError::UnparsableU32(LineType::DimensionsWidth, pe))?; Ok((width, height)) } _ => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Hdr.into(), UnsupportedErrorKind::GenericFeature(format!( "Orientation {} {}", limit_string_len(c1_tag, 4), limit_string_len(c2_tag, 4) )), ), )), } // final expression. Returns value } // Returns string with no more than len+3 characters fn limit_string_len(s: &str, len: usize) -> String { let s_char_len = s.chars().count(); if s_char_len > len { s.chars().take(len).chain("...".chars()).collect() } else { s.into() } } // Splits string into (before separator, after separator) tuple // or None if separator isn't found fn split_at_first<'a>(s: &'a str, separator: &str) -> Option<(&'a str, &'a str)> { match s.find(separator) { None | Some(0) => None, Some(p) if p >= s.len() - separator.len() => None, Some(p) => Some((&s[..p], &s[(p + separator.len())..])), } } // Reads input until b"\n" or EOF // Returns vector of read bytes NOT including end of line characters // or return None to indicate end of file fn read_line_u8(r: &mut R) -> io::Result>> { let mut ret = Vec::with_capacity(16); loop { let mut byte = [0]; if r.read(&mut byte)? 
== 0 || byte[0] == b'\n' { if ret.is_empty() && byte[0] != b'\n' { return Ok(None); } else { return Ok(Some(ret)); } } ret.push(byte[0]); } } #[cfg(test)] mod tests { use std::{borrow::Cow, io::Cursor}; use super::*; #[test] fn split_at_first_test() { assert_eq!(split_at_first(&Cow::Owned("".into()), "="), None); assert_eq!(split_at_first(&Cow::Owned("=".into()), "="), None); assert_eq!(split_at_first(&Cow::Owned("= ".into()), "="), None); assert_eq!( split_at_first(&Cow::Owned(" = ".into()), "="), Some((" ", " ")) ); assert_eq!( split_at_first(&Cow::Owned("EXPOSURE= ".into()), "="), Some(("EXPOSURE", " ")) ); assert_eq!( split_at_first(&Cow::Owned("EXPOSURE= =".into()), "="), Some(("EXPOSURE", " =")) ); assert_eq!( split_at_first(&Cow::Owned("EXPOSURE== =".into()), "=="), Some(("EXPOSURE", " =")) ); assert_eq!(split_at_first(&Cow::Owned("EXPOSURE".into()), ""), None); } #[test] fn read_line_u8_test() { let buf: Vec<_> = (&b"One\nTwo\nThree\nFour\n\n\n"[..]).into(); let input = &mut Cursor::new(buf); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b"One"[..]); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b"Two"[..]); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b"Three"[..]); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b"Four"[..]); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b""[..]); assert_eq!(&read_line_u8(input).unwrap().unwrap()[..], &b""[..]); assert_eq!(read_line_u8(input).unwrap(), None); } #[test] fn dimension_overflow() { let data = b"#?RADIANCE\nFORMAT=32-bit_rle_rgbe\n\n -Y 4294967295 +X 4294967295"; assert!(HdrDecoder::new(Cursor::new(data)).is_err()); assert!(HdrDecoder::new_nonstrict(Cursor::new(data)).is_err()); } } image-0.25.5/src/codecs/hdr/encoder.rs000064400000000000000000000413431046102023000155750ustar 00000000000000use crate::codecs::hdr::{rgbe8, Rgbe8Pixel, SIGNATURE}; use crate::color::Rgb; use crate::error::{EncodingError, ImageFormatHint, ImageResult}; use crate::{ExtendedColorType, ImageEncoder, ImageError, ImageFormat}; use std::cmp::Ordering; use std::io::{Result, Write}; /// Radiance HDR encoder pub struct HdrEncoder { w: W, } impl ImageEncoder for HdrEncoder { fn write_image( self, unaligned_bytes: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { match color_type { ExtendedColorType::Rgb32F => { let bytes_per_pixel = color_type.bits_per_pixel() as usize / 8; let rgbe_pixels = unaligned_bytes .chunks_exact(bytes_per_pixel) .map(|bytes| to_rgbe8(Rgb::(bytemuck::pod_read_unaligned(bytes)))); // the length will be checked inside encode_pixels self.encode_pixels(rgbe_pixels, width as usize, height as usize) } _ => Err(ImageError::Encoding(EncodingError::new( ImageFormatHint::Exact(ImageFormat::Hdr), "hdr format currently only supports the `Rgb32F` color type".to_string(), ))), } } } impl HdrEncoder { /// Creates encoder pub fn new(w: W) -> HdrEncoder { HdrEncoder { w } } /// Encodes the image ```rgb``` /// that has dimensions ```width``` and ```height``` pub fn encode(self, rgb: &[Rgb], width: usize, height: usize) -> ImageResult<()> { self.encode_pixels(rgb.iter().map(|&rgb| to_rgbe8(rgb)), width, height) } /// Encodes the image ```flattened_rgbe_pixels``` /// that has dimensions ```width``` and ```height```. /// The callback must return the color for the given flattened index of the pixel (row major). 
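///
/// The RGBE layout written here mirrors the decoder's `Rgbe8Pixel::to_hdr`:
/// three 8-bit mantissas share one 8-bit exponent biased by 128. A rough
/// standalone sketch of that mapping (`rgbe_to_linear` is a hypothetical
/// helper, not part of this crate's API):
///
/// ```
/// fn rgbe_to_linear(c: [u8; 3], e: u8) -> [f32; 3] {
///     if e == 0 {
///         return [0.0; 3];
///     }
///     // 128 is the exponent bias; the extra 8 accounts for the 8-bit mantissa scale.
///     let scale = f32::exp2(f32::from(e) - (128.0 + 8.0));
///     [f32::from(c[0]), f32::from(c[1]), f32::from(c[2])].map(|v| v * scale)
/// }
///
/// assert_eq!(rgbe_to_linear([0, 0, 0], 0), [0.0, 0.0, 0.0]);
/// ```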
fn encode_pixels( mut self, mut flattened_rgbe_pixels: impl ExactSizeIterator, width: usize, height: usize, ) -> ImageResult<()> { assert!( flattened_rgbe_pixels.len() >= width * height, "not enough pixels provided" ); // bonus: this might elide some bounds checks let w = &mut self.w; w.write_all(SIGNATURE)?; w.write_all(b"\n")?; w.write_all(b"# Rust HDR encoder\n")?; w.write_all(b"FORMAT=32-bit_rle_rgbe\n\n")?; w.write_all(format!("-Y {height} +X {width}\n").as_bytes())?; if !(8..=32_768).contains(&width) { for pixel in flattened_rgbe_pixels { write_rgbe8(w, pixel)?; } } else { // new RLE marker contains scanline width let marker = rgbe8(2, 2, (width / 256) as u8, (width % 256) as u8); // buffers for encoded pixels let mut bufr = vec![0; width]; let mut bufg = vec![0; width]; let mut bufb = vec![0; width]; let mut bufe = vec![0; width]; let mut rle_buf = vec![0; width]; for _scanline_index in 0..height { assert!(flattened_rgbe_pixels.len() >= width); // may reduce the bound checks for ((((r, g), b), e), pixel) in bufr .iter_mut() .zip(bufg.iter_mut()) .zip(bufb.iter_mut()) .zip(bufe.iter_mut()) .zip(&mut flattened_rgbe_pixels) { *r = pixel.c[0]; *g = pixel.c[1]; *b = pixel.c[2]; *e = pixel.e; } write_rgbe8(w, marker)?; // New RLE encoding marker rle_buf.clear(); rle_compress(&bufr[..], &mut rle_buf); w.write_all(&rle_buf[..])?; rle_buf.clear(); rle_compress(&bufg[..], &mut rle_buf); w.write_all(&rle_buf[..])?; rle_buf.clear(); rle_compress(&bufb[..], &mut rle_buf); w.write_all(&rle_buf[..])?; rle_buf.clear(); rle_compress(&bufe[..], &mut rle_buf); w.write_all(&rle_buf[..])?; } } Ok(()) } } #[derive(Debug, PartialEq, Eq)] enum RunOrNot { Run(u8, usize), Norun(usize, usize), } use self::RunOrNot::{Norun, Run}; const RUN_MAX_LEN: usize = 127; const NORUN_MAX_LEN: usize = 128; struct RunIterator<'a> { data: &'a [u8], curidx: usize, } impl<'a> RunIterator<'a> { fn new(data: &'a [u8]) -> RunIterator<'a> { RunIterator { data, curidx: 0 } } } impl Iterator for RunIterator<'_> { type Item = RunOrNot; fn next(&mut self) -> Option { if self.curidx == self.data.len() { None } else { let cv = self.data[self.curidx]; let crun = self.data[self.curidx..] .iter() .take_while(|&&v| v == cv) .take(RUN_MAX_LEN) .count(); let ret = if crun > 2 { Run(cv, crun) } else { Norun(self.curidx, crun) }; self.curidx += crun; Some(ret) } } } struct NorunCombineIterator<'a> { runiter: RunIterator<'a>, prev: Option, } impl<'a> NorunCombineIterator<'a> { fn new(data: &'a [u8]) -> NorunCombineIterator<'a> { NorunCombineIterator { runiter: RunIterator::new(data), prev: None, } } } // Combines sequential noruns produced by RunIterator impl Iterator for NorunCombineIterator<'_> { type Item = RunOrNot; fn next(&mut self) -> Option { loop { match self.prev.take() { Some(Run(c, len)) => { // Just return stored run return Some(Run(c, len)); } Some(Norun(idx, len)) => { // Let's see if we need to continue norun match self.runiter.next() { Some(Norun(_, len1)) => { // norun continues let clen = len + len1; // combined length match clen.cmp(&NORUN_MAX_LEN) { Ordering::Equal => return Some(Norun(idx, clen)), Ordering::Greater => { // combined norun exceeds maximum length. store extra part of norun self.prev = Some(Norun(idx + NORUN_MAX_LEN, clen - NORUN_MAX_LEN)); // then return maximal norun return Some(Norun(idx, NORUN_MAX_LEN)); } Ordering::Less => { // len + len1 < NORUN_MAX_LEN self.prev = Some(Norun(idx, len + len1)); // combine and continue loop } } } Some(Run(c, len1)) => { // Run encountered. 
Store it self.prev = Some(Run(c, len1)); return Some(Norun(idx, len)); // and return combined norun } None => { // End of sequence return Some(Norun(idx, len)); // return combined norun } } } // End match self.prev.take() == Some(NoRun()) None => { // No norun to combine match self.runiter.next() { Some(Norun(idx, len)) => { self.prev = Some(Norun(idx, len)); // store for combine and continue the loop } Some(Run(c, len)) => { // Some run. Just return it return Some(Run(c, len)); } None => { // That's all, folks return None; } } } // End match self.prev.take() == None } // End match } // End loop } } // Appends RLE compressed ```data``` to ```rle``` fn rle_compress(data: &[u8], rle: &mut Vec) { rle.clear(); if data.is_empty() { rle.push(0); // Technically correct. It means read next 0 bytes. return; } // Task: split data into chunks of repeating (max 127) and non-repeating bytes (max 128) // Prepend non-repeating chunk with its length // Replace repeating byte with (run length + 128) and the byte for rnr in NorunCombineIterator::new(data) { match rnr { Run(c, len) => { assert!(len <= 127); rle.push(128u8 + len as u8); rle.push(c); } Norun(idx, len) => { assert!(len <= 128); rle.push(len as u8); rle.extend_from_slice(&data[idx..idx + len]); } } } } fn write_rgbe8(w: &mut W, v: Rgbe8Pixel) -> Result<()> { w.write_all(&[v.c[0], v.c[1], v.c[2], v.e]) } /// Converts ```Rgb``` into ```Rgbe8Pixel``` pub(crate) fn to_rgbe8(pix: Rgb) -> Rgbe8Pixel { let pix = pix.0; let mx = f32::max(pix[0], f32::max(pix[1], pix[2])); if mx <= 0.0 { Rgbe8Pixel { c: [0, 0, 0], e: 0 } } else { // let (frac, exp) = mx.frexp(); // unstable yet let exp = mx.log2().floor() as i32 + 1; let mul = f32::powi(2.0, exp); let mut conv = [0u8; 3]; for (cv, &sv) in conv.iter_mut().zip(pix.iter()) { *cv = f32::trunc(sv / mul * 256.0) as u8; } Rgbe8Pixel { c: conv, e: (exp + 128) as u8, } } } #[test] fn to_rgbe8_test() { use crate::codecs::hdr::rgbe8; let test_cases = vec![rgbe8(0, 0, 0, 0), rgbe8(1, 1, 128, 128)]; for &pix in &test_cases { assert_eq!(pix, to_rgbe8(pix.to_hdr())); } for mc in 128..255 { // TODO: use inclusive range when stable let pix = rgbe8(mc, mc, mc, 100); assert_eq!(pix, to_rgbe8(pix.to_hdr())); let pix = rgbe8(mc, 0, mc, 130); assert_eq!(pix, to_rgbe8(pix.to_hdr())); let pix = rgbe8(0, 0, mc, 140); assert_eq!(pix, to_rgbe8(pix.to_hdr())); let pix = rgbe8(1, 0, mc, 150); assert_eq!(pix, to_rgbe8(pix.to_hdr())); let pix = rgbe8(1, mc, 10, 128); assert_eq!(pix, to_rgbe8(pix.to_hdr())); for c in 0..255 { // Radiance HDR seems to be pre IEEE 754. 
// exponent can be -128 (represented as 0u8), so some colors cannot be represented in normalized f32 // Let's exclude exponent value of -128 (0u8) from testing let pix = rgbe8(1, mc, c, if c == 0 { 1 } else { c }); assert_eq!(pix, to_rgbe8(pix.to_hdr())); } } fn relative_dist(a: Rgb, b: Rgb) -> f32 { // maximal difference divided by maximal value let max_diff = a.0.iter() .zip(b.0.iter()) .fold(0.0, |diff, (&a, &b)| f32::max(diff, (a - b).abs())); let max_val = a.0.iter() .chain(b.0.iter()) .fold(0.0, |maxv, &a| f32::max(maxv, a)); if max_val == 0.0 { 0.0 } else { max_diff / max_val } } let test_values = vec![ 0.000_001, 0.000_02, 0.000_3, 0.004, 0.05, 0.6, 7.0, 80.0, 900.0, 1_000.0, 20_000.0, 300_000.0, ]; for &r in &test_values { for &g in &test_values { for &b in &test_values { let c1 = Rgb([r, g, b]); let c2 = to_rgbe8(c1).to_hdr(); let rel_dist = relative_dist(c1, c2); // Maximal value is normalized to the range 128..256, thus we have 1/128 precision assert!( rel_dist <= 1.0 / 128.0, "Relative distance ({}) exceeds 1/128 for {:?} and {:?}", rel_dist, c1, c2 ); } } } } #[test] fn runiterator_test() { let data = []; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), None); let data = [5]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Norun(0, 1))); assert_eq!(run_iter.next(), None); let data = [1, 1]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Norun(0, 2))); assert_eq!(run_iter.next(), None); let data = [0, 0, 0]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Run(0u8, 3))); assert_eq!(run_iter.next(), None); let data = [0, 0, 1, 1]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Norun(0, 2))); assert_eq!(run_iter.next(), Some(Norun(2, 2))); assert_eq!(run_iter.next(), None); let data = [0, 0, 0, 1, 1]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Run(0u8, 3))); assert_eq!(run_iter.next(), Some(Norun(3, 2))); assert_eq!(run_iter.next(), None); let data = [1, 2, 2, 2]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Norun(0, 1))); assert_eq!(run_iter.next(), Some(Run(2u8, 3))); assert_eq!(run_iter.next(), None); let data = [1, 1, 2, 2, 2]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Norun(0, 2))); assert_eq!(run_iter.next(), Some(Run(2u8, 3))); assert_eq!(run_iter.next(), None); let data = [2; 128]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Run(2u8, 127))); assert_eq!(run_iter.next(), Some(Norun(127, 1))); assert_eq!(run_iter.next(), None); let data = [2; 129]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Run(2u8, 127))); assert_eq!(run_iter.next(), Some(Norun(127, 2))); assert_eq!(run_iter.next(), None); let data = [2; 130]; let mut run_iter = RunIterator::new(&data[..]); assert_eq!(run_iter.next(), Some(Run(2u8, 127))); assert_eq!(run_iter.next(), Some(Run(2u8, 3))); assert_eq!(run_iter.next(), None); } #[test] fn noruncombine_test() { fn a(mut v: Vec, mut other: Vec) -> Vec { v.append(&mut other); v } let v = []; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), None); let v = [1]; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Norun(0, 1))); assert_eq!(rsi.next(), None); let v = [2, 2]; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Norun(0, 2))); 
assert_eq!(rsi.next(), None); let v = [3, 3, 3]; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Run(3, 3))); assert_eq!(rsi.next(), None); let v = [4, 4, 3, 3, 3]; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Norun(0, 2))); assert_eq!(rsi.next(), Some(Run(3, 3))); assert_eq!(rsi.next(), None); let v = vec![40; 400]; let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Run(40, 127))); assert_eq!(rsi.next(), Some(Run(40, 127))); assert_eq!(rsi.next(), Some(Run(40, 127))); assert_eq!(rsi.next(), Some(Run(40, 19))); assert_eq!(rsi.next(), None); let v = a(a(vec![5; 3], vec![6; 129]), vec![7, 3, 7, 10, 255]); let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Run(5, 3))); assert_eq!(rsi.next(), Some(Run(6, 127))); assert_eq!(rsi.next(), Some(Norun(130, 7))); assert_eq!(rsi.next(), None); let v = a(a(vec![5; 2], vec![6; 129]), vec![7, 3, 7, 7, 255]); let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Norun(0, 2))); assert_eq!(rsi.next(), Some(Run(6, 127))); assert_eq!(rsi.next(), Some(Norun(129, 7))); assert_eq!(rsi.next(), None); let v: Vec<_> = std::iter::repeat(()) .flat_map(|_| (0..2)) .take(257) .collect(); let mut rsi = NorunCombineIterator::new(&v[..]); assert_eq!(rsi.next(), Some(Norun(0, 128))); assert_eq!(rsi.next(), Some(Norun(128, 128))); assert_eq!(rsi.next(), Some(Norun(256, 1))); assert_eq!(rsi.next(), None); } image-0.25.5/src/codecs/hdr/mod.rs000064400000000000000000000004671046102023000147370ustar 00000000000000//! Decoding of Radiance HDR Images //! //! A decoder for Radiance HDR images //! //! # Related Links //! //! * //! * //! mod decoder; mod encoder; pub use self::decoder::*; pub use self::encoder::*; image-0.25.5/src/codecs/ico/decoder.rs000064400000000000000000000447311046102023000155640ustar 00000000000000use byteorder_lite::{LittleEndian, ReadBytesExt}; use std::io::{BufRead, Read, Seek, SeekFrom}; use std::{error, fmt}; use crate::color::ColorType; use crate::error::{ DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageFormat}; use self::InnerDecoder::*; use crate::codecs::bmp::BmpDecoder; use crate::codecs::png::{PngDecoder, PNG_SIGNATURE}; /// Errors that can occur during decoding and parsing an ICO image or one of its enclosed images. #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum DecoderError { /// The ICO directory is empty NoEntries, /// The number of color planes (0 or 1), or the horizontal coordinate of the hotspot for CUR files too big. IcoEntryTooManyPlanesOrHotspot, /// The bit depth (may be 0 meaning unspecified), or the vertical coordinate of the hotspot for CUR files too big. IcoEntryTooManyBitsPerPixelOrHotspot, /// The entry is in PNG format and specified a length that is shorter than PNG header. PngShorterThanHeader, /// The enclosed PNG is not in RGBA, which is invalid: /. PngNotRgba, /// The entry is in BMP format and specified a data size that is not correct for the image and optional mask data. InvalidDataSize, /// The dimensions specified by the entry does not match the dimensions in the header of the enclosed image. 
ImageEntryDimensionMismatch { /// The mismatched subimage's type format: IcoEntryImageFormat, /// The dimensions specified by the entry entry: (u16, u16), /// The dimensions of the image itself image: (u32, u32), }, } impl fmt::Display for DecoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { DecoderError::NoEntries => f.write_str("ICO directory contains no image"), DecoderError::IcoEntryTooManyPlanesOrHotspot => { f.write_str("ICO image entry has too many color planes or too large hotspot value") } DecoderError::IcoEntryTooManyBitsPerPixelOrHotspot => f.write_str( "ICO image entry has too many bits per pixel or too large hotspot value", ), DecoderError::PngShorterThanHeader => { f.write_str("Entry specified a length that is shorter than PNG header!") } DecoderError::PngNotRgba => f.write_str("The PNG is not in RGBA format!"), DecoderError::InvalidDataSize => { f.write_str("ICO image data size did not match expected size") } DecoderError::ImageEntryDimensionMismatch { format, entry, image, } => f.write_fmt(format_args!( "Entry{entry:?} and {format}{image:?} dimensions do not match!" )), } } } impl From for ImageError { fn from(e: DecoderError) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Ico.into(), e)) } } impl error::Error for DecoderError {} /// The image formats an ICO may contain #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum IcoEntryImageFormat { /// PNG in ARGB Png, /// BMP with optional alpha mask Bmp, } impl fmt::Display for IcoEntryImageFormat { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(match self { IcoEntryImageFormat::Png => "PNG", IcoEntryImageFormat::Bmp => "BMP", }) } } impl From for ImageFormat { fn from(val: IcoEntryImageFormat) -> Self { match val { IcoEntryImageFormat::Png => ImageFormat::Png, IcoEntryImageFormat::Bmp => ImageFormat::Bmp, } } } /// An ico decoder pub struct IcoDecoder { selected_entry: DirEntry, inner_decoder: InnerDecoder, } enum InnerDecoder { Bmp(BmpDecoder), Png(Box>), } #[derive(Clone, Copy, Default)] struct DirEntry { width: u8, height: u8, // We ignore some header fields as they will be replicated in the PNG, BMP and they are not // necessary for determining the best_entry. #[allow(unused)] color_count: u8, // Wikipedia has this to say: // Although Microsoft's technical documentation states that this value must be zero, the icon // encoder built into .NET (System.Drawing.Icon.Save) sets this value to 255. It appears that // the operating system ignores this value altogether. #[allow(unused)] reserved: u8, // We ignore some header fields as they will be replicated in the PNG, BMP and they are not // necessary for determining the best_entry. 
#[allow(unused)] num_color_planes: u16, bits_per_pixel: u16, image_length: u32, image_offset: u32, } impl IcoDecoder { /// Create a new decoder that decodes from the stream ```r``` pub fn new(mut r: R) -> ImageResult> { let entries = read_entries(&mut r)?; let entry = best_entry(entries)?; let decoder = entry.decoder(r)?; Ok(IcoDecoder { selected_entry: entry, inner_decoder: decoder, }) } } fn read_entries(r: &mut R) -> ImageResult> { let _reserved = r.read_u16::()?; let _type = r.read_u16::()?; let count = r.read_u16::()?; (0..count).map(|_| read_entry(r)).collect() } fn read_entry(r: &mut R) -> ImageResult { Ok(DirEntry { width: r.read_u8()?, height: r.read_u8()?, color_count: r.read_u8()?, reserved: r.read_u8()?, num_color_planes: { // This may be either the number of color planes (0 or 1), or the horizontal coordinate // of the hotspot for CUR files. let num = r.read_u16::()?; if num > 256 { return Err(DecoderError::IcoEntryTooManyPlanesOrHotspot.into()); } num }, bits_per_pixel: { // This may be either the bit depth (may be 0 meaning unspecified), // or the vertical coordinate of the hotspot for CUR files. let num = r.read_u16::()?; if num > 256 { return Err(DecoderError::IcoEntryTooManyBitsPerPixelOrHotspot.into()); } num }, image_length: r.read_u32::()?, image_offset: r.read_u32::()?, }) } /// Find the entry with the highest (color depth, size). fn best_entry(mut entries: Vec) -> ImageResult { let mut best = entries.pop().ok_or(DecoderError::NoEntries)?; let mut best_score = ( best.bits_per_pixel, u32::from(best.real_width()) * u32::from(best.real_height()), ); for entry in entries { let score = ( entry.bits_per_pixel, u32::from(entry.real_width()) * u32::from(entry.real_height()), ); if score > best_score { best = entry; best_score = score; } } Ok(best) } impl DirEntry { fn real_width(&self) -> u16 { match self.width { 0 => 256, w => u16::from(w), } } fn real_height(&self) -> u16 { match self.height { 0 => 256, h => u16::from(h), } } fn matches_dimensions(&self, width: u32, height: u32) -> bool { u32::from(self.real_width()) == width.min(256) && u32::from(self.real_height()) == height.min(256) } fn seek_to_start(&self, r: &mut R) -> ImageResult<()> { r.seek(SeekFrom::Start(u64::from(self.image_offset)))?; Ok(()) } fn is_png(&self, r: &mut R) -> ImageResult { self.seek_to_start(r)?; // Read the first 8 bytes to sniff the image. let mut signature = [0u8; 8]; r.read_exact(&mut signature)?; Ok(signature == PNG_SIGNATURE) } fn decoder(&self, mut r: R) -> ImageResult> { let is_png = self.is_png(&mut r)?; self.seek_to_start(&mut r)?; if is_png { Ok(Png(Box::new(PngDecoder::new(r)?))) } else { Ok(Bmp(BmpDecoder::new_with_ico_format(r)?)) } } } impl ImageDecoder for IcoDecoder { fn dimensions(&self) -> (u32, u32) { match self.inner_decoder { Bmp(ref decoder) => decoder.dimensions(), Png(ref decoder) => decoder.dimensions(), } } fn color_type(&self) -> ColorType { match self.inner_decoder { Bmp(ref decoder) => decoder.color_type(), Png(ref decoder) => decoder.color_type(), } } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); match self.inner_decoder { Png(decoder) => { if self.selected_entry.image_length < PNG_SIGNATURE.len() as u32 { return Err(DecoderError::PngShorterThanHeader.into()); } // Check if the image dimensions match the ones in the image data. 
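// Note that the directory entry encodes a dimension of 256 as 0, so `matches_dimensions`
// maps 0 back to 256 and clamps the decoded dimensions to 256 before comparing.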
let (width, height) = decoder.dimensions(); if !self.selected_entry.matches_dimensions(width, height) { return Err(DecoderError::ImageEntryDimensionMismatch { format: IcoEntryImageFormat::Png, entry: ( self.selected_entry.real_width(), self.selected_entry.real_height(), ), image: (width, height), } .into()); } // Embedded PNG images can only be of the 32BPP RGBA format. // https://blogs.msdn.microsoft.com/oldnewthing/20101022-00/?p=12473/ if decoder.color_type() != ColorType::Rgba8 { return Err(DecoderError::PngNotRgba.into()); } decoder.read_image(buf) } Bmp(mut decoder) => { let (width, height) = decoder.dimensions(); if !self.selected_entry.matches_dimensions(width, height) { return Err(DecoderError::ImageEntryDimensionMismatch { format: IcoEntryImageFormat::Bmp, entry: ( self.selected_entry.real_width(), self.selected_entry.real_height(), ), image: (width, height), } .into()); } // The ICO decoder needs an alpha channel to apply the AND mask. if decoder.color_type() != ColorType::Rgba8 { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Bmp.into(), UnsupportedErrorKind::Color(decoder.color_type().into()), ), )); } decoder.read_image_data(buf)?; let r = decoder.reader(); let image_end = r.stream_position()?; let data_end = u64::from(self.selected_entry.image_offset) + u64::from(self.selected_entry.image_length); let mask_row_bytes = ((width + 31) / 32) * 4; let mask_length = u64::from(mask_row_bytes) * u64::from(height); // data_end should be image_end + the mask length (mask_row_bytes * height). // According to // https://devblogs.microsoft.com/oldnewthing/20101021-00/?p=12483 // the mask is required, but according to Wikipedia // https://en.wikipedia.org/wiki/ICO_(file_format) // the mask is not required. Unfortunately, Wikipedia does not have a citation // for that claim, so we can't be sure which is correct. if data_end >= image_end + mask_length { // If there's an AND mask following the image, read and apply it. for y in 0..height { let mut x = 0; for _ in 0..mask_row_bytes { // Apply the bits of each byte until we reach the end of the row. let mask_byte = r.read_u8()?; for bit in (0..8).rev() { if x >= width { break; } if mask_byte & (1 << bit) != 0 { // Set alpha channel to transparent. buf[((height - y - 1) * width + x) as usize * 4 + 3] = 0; } x += 1; } } } Ok(()) } else if data_end == image_end { // accept images with no mask data Ok(()) } else { Err(DecoderError::InvalidDataSize.into()) } } } } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } #[cfg(test)] mod test { use super::*; // Test if BMP images without alpha channel inside ICOs don't panic. // Because the test data is invalid decoding should produce an error. 
#[test] fn bmp_16_with_missing_alpha_channel() { let data = vec![ 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x0e, 0x04, 0xc3, 0x7e, 0x00, 0x00, 0x00, 0x00, 0x7c, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x00, 0xf8, 0xff, 0xff, 0xff, 0x01, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8f, 0xf6, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x66, 0x74, 0x83, 0x70, 0x61, 0x76, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xeb, 0x00, 0x9b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00, 0x00, 0x62, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x0c, 0x00, 0x00, 0x00, 0xc3, 0x3f, 0x94, 0x61, 0xaa, 0x17, 0x4d, 0x8d, 0x79, 0x1d, 0x8b, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x2e, 0x28, 0x40, 0xe5, 0x9f, 0x4b, 0x4d, 0xe9, 0x87, 0xd3, 0xda, 0xd6, 0x89, 0x81, 0xc5, 0xa4, 0xa1, 0x60, 0x98, 0x31, 0xc7, 0x1d, 0xb6, 0x8f, 0x20, 0xc8, 0x3e, 0xee, 0xd8, 0xe4, 0x8f, 0xee, 0x7b, 0x48, 0x9b, 0x88, 0x25, 0x13, 0xda, 0xa4, 0x13, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x40, 0x16, 0x01, 0xff, 0xff, 0xff, 0xff, 0xe9, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa3, 0x66, 0x64, 0x41, 0x54, 0xa3, 0xa3, 0x00, 0x00, 0x00, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa3, 0x66, 0x64, 0x41, 0x54, 0xa3, 0xa3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8f, 0xf6, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x66, 0x74, 0x83, 0x70, 0x61, 0x76, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xeb, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00, 0x00, 0x62, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x94, 0xc8, 0x00, 0x02, 0x0c, 0x00, 0xff, 0xff, 0xc6, 0x84, 0x00, 0x2a, 0x75, 0x03, 0xa3, 0x05, 0xfb, 0xe1, 0x6e, 0xe8, 0x27, 0xd6, 0xd3, 0x96, 0xc1, 0xe4, 0x30, 0x0c, 0x05, 0xb9, 0xa3, 0x8b, 0x29, 0xda, 0xa4, 0xf1, 0x4d, 0xf3, 0xb2, 0x98, 0x2b, 0xe6, 0x93, 0x07, 0xf9, 0xca, 0x2b, 0xc2, 0x39, 0x20, 0xba, 0x7c, 0xa0, 0xb1, 0x43, 0xe6, 0xf9, 0xdc, 0xd1, 0xc2, 0x52, 0xdc, 0x41, 0xc1, 0x2f, 0x29, 0xf7, 0x46, 0x32, 0xda, 0x1b, 0x72, 0x8c, 0xe6, 0x2b, 0x01, 0xe5, 0x49, 0x21, 0x89, 0x89, 0xe4, 0x3d, 0xa1, 0xdb, 0x3b, 0x4a, 0x0b, 0x52, 0x86, 0x52, 0x33, 0x9d, 0xb2, 0xcf, 0x4a, 0x86, 0x53, 0xd7, 0xa9, 0x4b, 0xaf, 0x62, 0x06, 0x49, 0x53, 0x00, 0xc3, 0x3f, 0x94, 0x61, 0xaa, 0x17, 0x4d, 0x8d, 0x79, 0x1d, 0x8b, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x2e, 0x28, 0x40, 0xe5, 0x9f, 0x4b, 0x4d, 0xe9, 0x87, 0xd3, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0xe7, 0xc5, 0x00, 0x02, 0x00, 0x00, 0x00, 0x06, 0x00, 0x0b, 0x00, 0x50, 0x31, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x76, 0x76, 0x01, 0x00, 0x00, 0x00, 0x76, 0x00, 0x00, 0x23, 0x3f, 0x52, 0x41, 0x44, 0x49, 0x41, 0x4e, 0x43, 0x45, 0x61, 0x50, 0x35, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x4d, 0x47, 0x49, 0x46, 0x38, 0x37, 0x61, 0x05, 0x50, 0x37, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc7, 0x37, 0x61, ]; let decoder = IcoDecoder::new(std::io::Cursor::new(&data)).unwrap(); let mut buf = vec![0; usize::try_from(decoder.total_bytes()).unwrap()]; assert!(decoder.read_image(&mut buf).is_err()); } } image-0.25.5/src/codecs/ico/encoder.rs000064400000000000000000000136211046102023000155700ustar 00000000000000use byteorder_lite::{LittleEndian, WriteBytesExt}; use std::borrow::Cow; use std::io::{self, Write}; use crate::error::{ImageError, ImageResult, ParameterError, ParameterErrorKind}; use crate::image::ImageEncoder; use crate::codecs::png::PngEncoder; use crate::ExtendedColorType; // Enum value indicating an ICO image (as opposed to a CUR image): const ICO_IMAGE_TYPE: u16 = 1; // The length of an ICO file ICONDIR structure, in bytes: const ICO_ICONDIR_SIZE: u32 = 6; // The length of an ICO file DIRENTRY structure, in bytes: const ICO_DIRENTRY_SIZE: u32 = 16; /// ICO encoder pub struct IcoEncoder { w: W, } /// An ICO image entry pub struct IcoFrame<'a> { // Pre-encoded PNG or BMP encoded_image: Cow<'a, [u8]>, // Stored as `0 => 256, n => n` width: u8, // Stored as `0 => 256, n => n` height: u8, color_type: ExtendedColorType, } impl<'a> IcoFrame<'a> { /// Construct a new `IcoFrame` using a pre-encoded PNG or BMP /// /// The `width` and `height` must be between 1 and 256 (inclusive). pub fn with_encoded( encoded_image: impl Into>, width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult { let encoded_image = encoded_image.into(); if !(1..=256).contains(&width) { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(format!( "the image width must be `1..=256`, instead width {width} was provided", )), ))); } if !(1..=256).contains(&height) { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(format!( "the image height must be `1..=256`, instead height {height} was provided", )), ))); } Ok(Self { encoded_image, width: width as u8, height: height as u8, color_type, }) } /// Construct a new `IcoFrame` by encoding `buf` as a PNG /// /// The `width` and `height` must be between 1 and 256 (inclusive) pub fn as_png( buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult { let mut image_data: Vec = Vec::new(); PngEncoder::new(&mut image_data).write_image(buf, width, height, color_type)?; let frame = Self::with_encoded(image_data, width, height, color_type)?; Ok(frame) } } impl IcoEncoder { /// Create a new encoder that writes its output to ```w```. pub fn new(w: W) -> IcoEncoder { IcoEncoder { w } } /// Takes some [`IcoFrame`]s and encodes them into an ICO. /// /// `images` is a list of images, usually ordered by dimension, which /// must be between 1 and 65535 (inclusive) in length. 
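    ///
    /// Illustrative usage sketch (added for this document, not part of the original
    /// sources); it assumes `file` is any `Write` sink and `frame` an already-built
    /// [`IcoFrame`]:
    ///
    /// ```no_run
    /// # use image::codecs::ico::{IcoEncoder, IcoFrame};
    /// # fn demo(file: std::fs::File, frame: IcoFrame<'_>) -> image::ImageResult<()> {
    /// // Frames are written in the order given; entry offsets are computed automatically.
    /// IcoEncoder::new(file).encode_images(&[frame])?;
    /// # Ok(())
    /// # }
    /// ```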
pub fn encode_images(mut self, images: &[IcoFrame<'_>]) -> ImageResult<()> { if !(1..=usize::from(u16::MAX)).contains(&images.len()) { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(format!( "the number of images must be `1..=u16::MAX`, instead {} images were provided", images.len(), )), ))); } let num_images = images.len() as u16; let mut offset = ICO_ICONDIR_SIZE + (ICO_DIRENTRY_SIZE * (images.len() as u32)); write_icondir(&mut self.w, num_images)?; for image in images { write_direntry( &mut self.w, image.width, image.height, image.color_type, offset, image.encoded_image.len() as u32, )?; offset += image.encoded_image.len() as u32; } for image in images { self.w.write_all(&image.encoded_image)?; } Ok(()) } } impl ImageEncoder for IcoEncoder { /// Write an ICO image with the specified width, height, and color type. /// /// For color types with 16-bit per channel or larger, the contents of `buf` should be in /// native endian. /// /// WARNING: In image 0.23.14 and earlier this method erroneously expected buf to be in big endian. #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); let image = IcoFrame::as_png(buf, width, height, color_type)?; self.encode_images(&[image]) } } fn write_icondir(w: &mut W, num_images: u16) -> io::Result<()> { // Reserved field (must be zero): w.write_u16::(0)?; // Image type (ICO or CUR): w.write_u16::(ICO_IMAGE_TYPE)?; // Number of images in the file: w.write_u16::(num_images)?; Ok(()) } fn write_direntry( w: &mut W, width: u8, height: u8, color: ExtendedColorType, data_start: u32, data_size: u32, ) -> io::Result<()> { // Image dimensions: w.write_u8(width)?; w.write_u8(height)?; // Number of colors in palette (or zero for no palette): w.write_u8(0)?; // Reserved field (must be zero): w.write_u8(0)?; // Color planes: w.write_u16::(0)?; // Bits per pixel: w.write_u16::(color.bits_per_pixel())?; // Image data size, in bytes: w.write_u32::(data_size)?; // Image data offset, in bytes: w.write_u32::(data_start)?; Ok(()) } image-0.25.5/src/codecs/ico/mod.rs000064400000000000000000000006161046102023000147300ustar 00000000000000//! Decoding and Encoding of ICO files //! //! A decoder and encoder for ICO (Windows Icon) image container files. //! //! # Related Links //! * //! * pub use self::decoder::IcoDecoder; #[allow(deprecated)] pub use self::encoder::{IcoEncoder, IcoFrame}; mod decoder; mod encoder; image-0.25.5/src/codecs/jpeg/decoder.rs000064400000000000000000000161651046102023000157370ustar 00000000000000use std::io::{BufRead, Seek}; use std::marker::PhantomData; use crate::color::ColorType; use crate::error::{ DecodingError, ImageError, ImageResult, LimitError, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageFormat}; use crate::metadata::Orientation; use crate::Limits; type ZuneColorSpace = zune_core::colorspace::ColorSpace; /// JPEG decoder pub struct JpegDecoder { input: Vec, orig_color_space: ZuneColorSpace, width: u16, height: u16, limits: Limits, orientation: Option, // For API compatibility with the previous jpeg_decoder wrapper. // Can be removed later, which would be an API break. 
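    // `R` is otherwise unused here: the constructor buffers the entire stream into
    // `input`, so the reader type only survives in this marker.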
phantom: PhantomData, } impl JpegDecoder { /// Create a new decoder that decodes from the stream ```r``` pub fn new(r: R) -> ImageResult> { let mut input = Vec::new(); let mut r = r; r.read_to_end(&mut input)?; let options = zune_core::options::DecoderOptions::default() .set_strict_mode(false) .set_max_width(usize::MAX) .set_max_height(usize::MAX); let mut decoder = zune_jpeg::JpegDecoder::new_with_options(input.as_slice(), options); decoder.decode_headers().map_err(ImageError::from_jpeg)?; // now that we've decoded the headers we can `.unwrap()` // all these functions that only fail if called before decoding the headers let (width, height) = decoder.dimensions().unwrap(); // JPEG can only express dimensions up to 65535x65535, so this conversion cannot fail let width: u16 = width.try_into().unwrap(); let height: u16 = height.try_into().unwrap(); let orig_color_space = decoder.get_output_colorspace().unwrap(); // Limits are disabled by default in the constructor for all decoders let limits = Limits::no_limits(); Ok(JpegDecoder { input, orig_color_space, width, height, limits, orientation: None, phantom: PhantomData, }) } } impl ImageDecoder for JpegDecoder { fn dimensions(&self) -> (u32, u32) { (u32::from(self.width), u32::from(self.height)) } fn color_type(&self) -> ColorType { ColorType::from_jpeg(self.orig_color_space) } fn icc_profile(&mut self) -> ImageResult>> { let mut decoder = zune_jpeg::JpegDecoder::new(&self.input); decoder.decode_headers().map_err(ImageError::from_jpeg)?; Ok(decoder.icc_profile()) } fn exif_metadata(&mut self) -> ImageResult>> { let mut decoder = zune_jpeg::JpegDecoder::new(&self.input); decoder.decode_headers().map_err(ImageError::from_jpeg)?; let exif = decoder.exif().cloned(); self.orientation = Some( exif.as_ref() .and_then(|exif| Orientation::from_exif_chunk(exif)) .unwrap_or(Orientation::NoTransforms), ); Ok(exif) } fn orientation(&mut self) -> ImageResult { // `exif_metadata` caches the orientation, so call it if `orientation` hasn't been set yet. if self.orientation.is_none() { let _ = self.exif_metadata()?; } Ok(self.orientation.unwrap()) } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { let advertised_len = self.total_bytes(); let actual_len = buf.len() as u64; if actual_len != advertised_len { return Err(ImageError::Decoding(DecodingError::new( ImageFormat::Jpeg.into(), format!( "Length of the decoded data {actual_len}\ doesn't match the advertised dimensions of the image\ that imply length {advertised_len}" ), ))); } let mut decoder = new_zune_decoder(&self.input, self.orig_color_space, self.limits); decoder.decode_into(buf).map_err(ImageError::from_jpeg)?; Ok(()) } fn set_limits(&mut self, limits: Limits) -> ImageResult<()> { limits.check_support(&crate::LimitSupport::default())?; let (width, height) = self.dimensions(); limits.check_dimensions(width, height)?; self.limits = limits; Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } impl ColorType { fn from_jpeg(colorspace: ZuneColorSpace) -> ColorType { let colorspace = to_supported_color_space(colorspace); use zune_core::colorspace::ColorSpace::*; match colorspace { // As of zune-jpeg 0.3.13 the output is always 8-bit, // but support for 16-bit JPEG might be added in the future. 
RGB => ColorType::Rgb8, RGBA => ColorType::Rgba8, Luma => ColorType::L8, LumaA => ColorType::La8, // to_supported_color_space() doesn't return any of the other variants _ => unreachable!(), } } } fn to_supported_color_space(orig: ZuneColorSpace) -> ZuneColorSpace { use zune_core::colorspace::ColorSpace::*; match orig { RGB | RGBA | Luma | LumaA => orig, // the rest is not supported by `image` so it will be converted to RGB during decoding _ => RGB, } } fn new_zune_decoder( input: &[u8], orig_color_space: ZuneColorSpace, limits: Limits, ) -> zune_jpeg::JpegDecoder<&[u8]> { let target_color_space = to_supported_color_space(orig_color_space); let mut options = zune_core::options::DecoderOptions::default() .jpeg_set_out_colorspace(target_color_space) .set_strict_mode(false); options = options.set_max_width(match limits.max_image_width { Some(max_width) => max_width as usize, // u32 to usize never truncates None => usize::MAX, }); options = options.set_max_height(match limits.max_image_height { Some(max_height) => max_height as usize, // u32 to usize never truncates None => usize::MAX, }); zune_jpeg::JpegDecoder::new_with_options(input, options) } impl ImageError { fn from_jpeg(err: zune_jpeg::errors::DecodeErrors) -> ImageError { use zune_jpeg::errors::DecodeErrors::*; match err { Unsupported(desc) => ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Jpeg.into(), UnsupportedErrorKind::GenericFeature(format!("{desc:?}")), )), LargeDimensions(_) => ImageError::Limits(LimitError::from_kind( crate::error::LimitErrorKind::DimensionError, )), err => ImageError::Decoding(DecodingError::new(ImageFormat::Jpeg.into(), err)), } } } #[cfg(test)] mod tests { use super::*; use std::{fs, io::Cursor}; #[test] fn test_exif_orientation() { let data = fs::read("tests/images/jpg/portrait_2.jpg").unwrap(); let mut decoder = JpegDecoder::new(Cursor::new(data)).unwrap(); assert_eq!(decoder.orientation().unwrap(), Orientation::FlipHorizontal); } } image-0.25.5/src/codecs/jpeg/encoder.rs000064400000000000000000001006311046102023000157410ustar 00000000000000#![allow(clippy::too_many_arguments)] use std::borrow::Cow; use std::io::{self, Write}; use crate::error::{ ImageError, ImageResult, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageEncoder, ImageFormat}; use crate::utils::clamp; use crate::{ExtendedColorType, GenericImageView, ImageBuffer, Luma, Pixel, Rgb}; use super::entropy::build_huff_lut_const; use super::transform; use crate::traits::PixelWithColorType; // Markers // Baseline DCT static SOF0: u8 = 0xC0; // Huffman Tables static DHT: u8 = 0xC4; // Start of Image (standalone) static SOI: u8 = 0xD8; // End of image (standalone) static EOI: u8 = 0xD9; // Start of Scan static SOS: u8 = 0xDA; // Quantization Tables static DQT: u8 = 0xDB; // Application segments start and end static APP0: u8 = 0xE0; // section K.1 // table K.1 #[rustfmt::skip] static STD_LUMA_QTABLE: [u8; 64] = [ 16, 11, 10, 16, 24, 40, 51, 61, 12, 12, 14, 19, 26, 58, 60, 55, 14, 13, 16, 24, 40, 57, 69, 56, 14, 17, 22, 29, 51, 87, 80, 62, 18, 22, 37, 56, 68, 109, 103, 77, 24, 35, 55, 64, 81, 104, 113, 92, 49, 64, 78, 87, 103, 121, 120, 101, 72, 92, 95, 98, 112, 100, 103, 99, ]; // table K.2 #[rustfmt::skip] static STD_CHROMA_QTABLE: [u8; 64] = [ 17, 18, 24, 47, 99, 99, 99, 99, 18, 21, 26, 66, 99, 99, 99, 99, 24, 26, 56, 99, 99, 99, 99, 99, 47, 66, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 
99, 99, 99, 99, 99, 99, 99, ]; // section K.3 // Code lengths and values for table K.3 static STD_LUMA_DC_CODE_LENGTHS: [u8; 16] = [ 0x00, 0x01, 0x05, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ]; static STD_LUMA_DC_VALUES: [u8; 12] = [ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, ]; static STD_LUMA_DC_HUFF_LUT: [(u8, u16); 256] = build_huff_lut_const(&STD_LUMA_DC_CODE_LENGTHS, &STD_LUMA_DC_VALUES); // Code lengths and values for table K.4 static STD_CHROMA_DC_CODE_LENGTHS: [u8; 16] = [ 0x00, 0x03, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, ]; static STD_CHROMA_DC_VALUES: [u8; 12] = [ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, ]; static STD_CHROMA_DC_HUFF_LUT: [(u8, u16); 256] = build_huff_lut_const(&STD_CHROMA_DC_CODE_LENGTHS, &STD_CHROMA_DC_VALUES); // Code lengths and values for table k.5 static STD_LUMA_AC_CODE_LENGTHS: [u8; 16] = [ 0x00, 0x02, 0x01, 0x03, 0x03, 0x02, 0x04, 0x03, 0x05, 0x05, 0x04, 0x04, 0x00, 0x00, 0x01, 0x7D, ]; static STD_LUMA_AC_VALUES: [u8; 162] = [ 0x01, 0x02, 0x03, 0x00, 0x04, 0x11, 0x05, 0x12, 0x21, 0x31, 0x41, 0x06, 0x13, 0x51, 0x61, 0x07, 0x22, 0x71, 0x14, 0x32, 0x81, 0x91, 0xA1, 0x08, 0x23, 0x42, 0xB1, 0xC1, 0x15, 0x52, 0xD1, 0xF0, 0x24, 0x33, 0x62, 0x72, 0x82, 0x09, 0x0A, 0x16, 0x17, 0x18, 0x19, 0x1A, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4A, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5A, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6A, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7A, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8A, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9A, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7, 0xA8, 0xA9, 0xAA, 0xB2, 0xB3, 0xB4, 0xB5, 0xB6, 0xB7, 0xB8, 0xB9, 0xBA, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7, 0xC8, 0xC9, 0xCA, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7, 0xD8, 0xD9, 0xDA, 0xE1, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7, 0xE8, 0xE9, 0xEA, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7, 0xF8, 0xF9, 0xFA, ]; static STD_LUMA_AC_HUFF_LUT: [(u8, u16); 256] = build_huff_lut_const(&STD_LUMA_AC_CODE_LENGTHS, &STD_LUMA_AC_VALUES); // Code lengths and values for table k.6 static STD_CHROMA_AC_CODE_LENGTHS: [u8; 16] = [ 0x00, 0x02, 0x01, 0x02, 0x04, 0x04, 0x03, 0x04, 0x07, 0x05, 0x04, 0x04, 0x00, 0x01, 0x02, 0x77, ]; static STD_CHROMA_AC_VALUES: [u8; 162] = [ 0x00, 0x01, 0x02, 0x03, 0x11, 0x04, 0x05, 0x21, 0x31, 0x06, 0x12, 0x41, 0x51, 0x07, 0x61, 0x71, 0x13, 0x22, 0x32, 0x81, 0x08, 0x14, 0x42, 0x91, 0xA1, 0xB1, 0xC1, 0x09, 0x23, 0x33, 0x52, 0xF0, 0x15, 0x62, 0x72, 0xD1, 0x0A, 0x16, 0x24, 0x34, 0xE1, 0x25, 0xF1, 0x17, 0x18, 0x19, 0x1A, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4A, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5A, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6A, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7A, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8A, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9A, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7, 0xA8, 0xA9, 0xAA, 0xB2, 0xB3, 0xB4, 0xB5, 0xB6, 0xB7, 0xB8, 0xB9, 0xBA, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7, 0xC8, 0xC9, 0xCA, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7, 0xD8, 0xD9, 0xDA, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7, 0xE8, 0xE9, 0xEA, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7, 0xF8, 0xF9, 0xFA, ]; static STD_CHROMA_AC_HUFF_LUT: [(u8, u16); 256] = build_huff_lut_const(&STD_CHROMA_AC_CODE_LENGTHS, &STD_CHROMA_AC_VALUES); static 
DCCLASS: u8 = 0; static ACCLASS: u8 = 1; static LUMADESTINATION: u8 = 0; static CHROMADESTINATION: u8 = 1; static LUMAID: u8 = 1; static CHROMABLUEID: u8 = 2; static CHROMAREDID: u8 = 3; /// The permutation of dct coefficients. #[rustfmt::skip] static UNZIGZAG: [u8; 64] = [ 0, 1, 8, 16, 9, 2, 3, 10, 17, 24, 32, 25, 18, 11, 4, 5, 12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13, 6, 7, 14, 21, 28, 35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51, 58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63, ]; /// A representation of a JPEG component #[derive(Copy, Clone)] struct Component { /// The Component's identifier id: u8, /// Horizontal sampling factor h: u8, /// Vertical sampling factor v: u8, /// The quantization table selector tq: u8, /// Index to the Huffman DC Table dc_table: u8, /// Index to the AC Huffman Table ac_table: u8, /// The dc prediction of the component _dc_pred: i32, } pub(crate) struct BitWriter { w: W, accumulator: u32, nbits: u8, } impl BitWriter { fn new(w: W) -> Self { BitWriter { w, accumulator: 0, nbits: 0, } } fn write_bits(&mut self, bits: u16, size: u8) -> io::Result<()> { if size == 0 { return Ok(()); } self.nbits += size; self.accumulator |= u32::from(bits) << (32 - self.nbits) as usize; while self.nbits >= 8 { let byte = self.accumulator >> 24; self.w.write_all(&[byte as u8])?; if byte == 0xFF { self.w.write_all(&[0x00])?; } self.nbits -= 8; self.accumulator <<= 8; } Ok(()) } fn pad_byte(&mut self) -> io::Result<()> { self.write_bits(0x7F, 7) } fn huffman_encode(&mut self, val: u8, table: &[(u8, u16); 256]) -> io::Result<()> { let (size, code) = table[val as usize]; assert!(size <= 16, "bad huffman value"); self.write_bits(code, size) } fn write_block( &mut self, block: &[i32; 64], prevdc: i32, dctable: &[(u8, u16); 256], actable: &[(u8, u16); 256], ) -> io::Result { // Differential DC encoding let dcval = block[0]; let diff = dcval - prevdc; let (size, value) = encode_coefficient(diff); self.huffman_encode(size, dctable)?; self.write_bits(value, size)?; // Figure F.2 let mut zero_run = 0; for &k in &UNZIGZAG[1..] 
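        // Walk the 63 AC coefficients in zig-zag order: zero runs longer than 15 are
        // emitted as ZRL symbols (0xF0), and if the block ends in a zero run the
        // end-of-block symbol (0x00) is written after the loop.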
{ if block[k as usize] == 0 { zero_run += 1; } else { while zero_run > 15 { self.huffman_encode(0xF0, actable)?; zero_run -= 16; } let (size, value) = encode_coefficient(block[k as usize]); let symbol = (zero_run << 4) | size; self.huffman_encode(symbol, actable)?; self.write_bits(value, size)?; zero_run = 0; } } if block[UNZIGZAG[63] as usize] == 0 { self.huffman_encode(0x00, actable)?; } Ok(dcval) } fn write_marker(&mut self, marker: u8) -> io::Result<()> { self.w.write_all(&[0xFF, marker]) } fn write_segment(&mut self, marker: u8, data: &[u8]) -> io::Result<()> { self.w.write_all(&[0xFF, marker])?; self.w.write_all(&(data.len() as u16 + 2).to_be_bytes())?; self.w.write_all(data) } } /// Represents a unit in which the density of an image is measured #[derive(Clone, Copy, Debug, Eq, PartialEq)] pub enum PixelDensityUnit { /// Represents the absence of a unit, the values indicate only a /// [pixel aspect ratio](https://en.wikipedia.org/wiki/Pixel_aspect_ratio) PixelAspectRatio, /// Pixels per inch (2.54 cm) Inches, /// Pixels per centimeter Centimeters, } /// Represents the pixel density of an image /// /// For example, a 300 DPI image is represented by: /// /// ```rust /// use image::codecs::jpeg::*; /// let hdpi = PixelDensity::dpi(300); /// assert_eq!(hdpi, PixelDensity {density: (300,300), unit: PixelDensityUnit::Inches}) /// ``` #[derive(Clone, Copy, Debug, Eq, PartialEq)] pub struct PixelDensity { /// A couple of values for (Xdensity, Ydensity) pub density: (u16, u16), /// The unit in which the density is measured pub unit: PixelDensityUnit, } impl PixelDensity { /// Creates the most common pixel density type: /// the horizontal and the vertical density are equal, /// and measured in pixels per inch. #[must_use] pub fn dpi(density: u16) -> Self { PixelDensity { density: (density, density), unit: PixelDensityUnit::Inches, } } } impl Default for PixelDensity { /// Returns a pixel density with a pixel aspect ratio of 1 fn default() -> Self { PixelDensity { density: (1, 1), unit: PixelDensityUnit::PixelAspectRatio, } } } /// The representation of a JPEG encoder pub struct JpegEncoder { writer: BitWriter, components: Vec, tables: Vec<[u8; 64]>, luma_dctable: Cow<'static, [(u8, u16); 256]>, luma_actable: Cow<'static, [(u8, u16); 256]>, chroma_dctable: Cow<'static, [(u8, u16); 256]>, chroma_actable: Cow<'static, [(u8, u16); 256]>, pixel_density: PixelDensity, } impl JpegEncoder { /// Create a new encoder that writes its output to ```w``` pub fn new(w: W) -> JpegEncoder { JpegEncoder::new_with_quality(w, 75) } /// Create a new encoder that writes its output to ```w```, and has /// the quality parameter ```quality``` with a value in the range 1-100 /// where 1 is the worst and 100 is the best. 
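    ///
    /// Note (added for this document): internally the quality value is mapped to a
    /// quantization-table scale the same way libjpeg does it: quality 50 leaves the
    /// base tables from section K unchanged, 75 roughly halves them, and 100 clamps
    /// every entry to 1.
    ///
    /// Illustrative sketch (not part of the original sources):
    ///
    /// ```no_run
    /// # use image::codecs::jpeg::JpegEncoder;
    /// let mut out: Vec<u8> = Vec::new();
    /// // Quality 85: keeps most detail while compressing noticeably better than 100.
    /// let _encoder = JpegEncoder::new_with_quality(&mut out, 85);
    /// ```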
pub fn new_with_quality(w: W, quality: u8) -> JpegEncoder { let components = vec![ Component { id: LUMAID, h: 1, v: 1, tq: LUMADESTINATION, dc_table: LUMADESTINATION, ac_table: LUMADESTINATION, _dc_pred: 0, }, Component { id: CHROMABLUEID, h: 1, v: 1, tq: CHROMADESTINATION, dc_table: CHROMADESTINATION, ac_table: CHROMADESTINATION, _dc_pred: 0, }, Component { id: CHROMAREDID, h: 1, v: 1, tq: CHROMADESTINATION, dc_table: CHROMADESTINATION, ac_table: CHROMADESTINATION, _dc_pred: 0, }, ]; // Derive our quantization table scaling value using the libjpeg algorithm let scale = u32::from(clamp(quality, 1, 100)); let scale = if scale < 50 { 5000 / scale } else { 200 - scale * 2 }; let mut tables = vec![STD_LUMA_QTABLE, STD_CHROMA_QTABLE]; tables.iter_mut().for_each(|t| { for v in t.iter_mut() { *v = clamp((u32::from(*v) * scale + 50) / 100, 1, u32::from(u8::MAX)) as u8; } }); JpegEncoder { writer: BitWriter::new(w), components, tables, luma_dctable: Cow::Borrowed(&STD_LUMA_DC_HUFF_LUT), luma_actable: Cow::Borrowed(&STD_LUMA_AC_HUFF_LUT), chroma_dctable: Cow::Borrowed(&STD_CHROMA_DC_HUFF_LUT), chroma_actable: Cow::Borrowed(&STD_CHROMA_AC_HUFF_LUT), pixel_density: PixelDensity::default(), } } /// Set the pixel density of the images the encoder will encode. /// If this method is not called, then a default pixel aspect ratio of 1x1 will be applied, /// and no DPI information will be stored in the image. pub fn set_pixel_density(&mut self, pixel_density: PixelDensity) { self.pixel_density = pixel_density; } /// Encodes the image stored in the raw byte buffer ```image``` /// that has dimensions ```width``` and ```height``` /// and ```ColorType``` ```c``` /// /// The Image in encoded with subsampling ratio 4:2:2 /// /// # Panics /// /// Panics if `width * height * color_type.bytes_per_pixel() != image.len()`. #[track_caller] pub fn encode( &mut self, image: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, image.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", image.len(), ); match color_type { ExtendedColorType::L8 => { let image: ImageBuffer, _> = ImageBuffer::from_raw(width, height, image).unwrap(); self.encode_image(&image) } ExtendedColorType::Rgb8 => { let image: ImageBuffer, _> = ImageBuffer::from_raw(width, height, image).unwrap(); self.encode_image(&image) } _ => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Jpeg.into(), UnsupportedErrorKind::Color(color_type), ), )), } } /// Encodes the given image. /// /// As a special feature this does not require the whole image to be present in memory at the /// same time such that it may be computed on the fly, which is why this method exists on this /// encoder but not on others. Instead the encoder will iterate over 8-by-8 blocks of pixels at /// a time, inspecting each pixel exactly once. You can rely on this behaviour when calling /// this method. 
/// /// The Image in encoded with subsampling ratio 4:2:2 pub fn encode_image(&mut self, image: &I) -> ImageResult<()> where I::Pixel: PixelWithColorType, { let n = I::Pixel::CHANNEL_COUNT; let color_type = I::Pixel::COLOR_TYPE; let num_components = if n == 1 || n == 2 { 1 } else { 3 }; self.writer.write_marker(SOI)?; let mut buf = Vec::new(); build_jfif_header(&mut buf, self.pixel_density); self.writer.write_segment(APP0, &buf)?; build_frame_header( &mut buf, 8, // TODO: not idiomatic yet. Should be an EncodingError and mention jpg. Further it // should check dimensions prior to writing. u16::try_from(image.width()).map_err(|_| { ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )) })?, u16::try_from(image.height()).map_err(|_| { ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )) })?, &self.components[..num_components], ); self.writer.write_segment(SOF0, &buf)?; assert_eq!(self.tables.len(), 2); let numtables = if num_components == 1 { 1 } else { 2 }; for (i, table) in self.tables[..numtables].iter().enumerate() { build_quantization_segment(&mut buf, 8, i as u8, table); self.writer.write_segment(DQT, &buf)?; } build_huffman_segment( &mut buf, DCCLASS, LUMADESTINATION, &STD_LUMA_DC_CODE_LENGTHS, &STD_LUMA_DC_VALUES, ); self.writer.write_segment(DHT, &buf)?; build_huffman_segment( &mut buf, ACCLASS, LUMADESTINATION, &STD_LUMA_AC_CODE_LENGTHS, &STD_LUMA_AC_VALUES, ); self.writer.write_segment(DHT, &buf)?; if num_components == 3 { build_huffman_segment( &mut buf, DCCLASS, CHROMADESTINATION, &STD_CHROMA_DC_CODE_LENGTHS, &STD_CHROMA_DC_VALUES, ); self.writer.write_segment(DHT, &buf)?; build_huffman_segment( &mut buf, ACCLASS, CHROMADESTINATION, &STD_CHROMA_AC_CODE_LENGTHS, &STD_CHROMA_AC_VALUES, ); self.writer.write_segment(DHT, &buf)?; } build_scan_header(&mut buf, &self.components[..num_components]); self.writer.write_segment(SOS, &buf)?; if ExtendedColorType::Rgb8 == color_type || ExtendedColorType::Rgba8 == color_type { self.encode_rgb(image) } else { self.encode_gray(image) }?; self.writer.pad_byte()?; self.writer.write_marker(EOI)?; Ok(()) } fn encode_gray(&mut self, image: &I) -> io::Result<()> { let mut yblock = [0u8; 64]; let mut y_dcprev = 0; let mut dct_yblock = [0i32; 64]; for y in (0..image.height()).step_by(8) { for x in (0..image.width()).step_by(8) { copy_blocks_gray(image, x, y, &mut yblock); // Level shift and fdct // Coeffs are scaled by 8 transform::fdct(&yblock, &mut dct_yblock); // Quantization for (i, dct) in dct_yblock.iter_mut().enumerate() { *dct = ((*dct / 8) as f32 / f32::from(self.tables[0][i])).round() as i32; } let la = &*self.luma_actable; let ld = &*self.luma_dctable; y_dcprev = self.writer.write_block(&dct_yblock, y_dcprev, ld, la)?; } } Ok(()) } fn encode_rgb(&mut self, image: &I) -> io::Result<()> { let mut y_dcprev = 0; let mut cb_dcprev = 0; let mut cr_dcprev = 0; let mut dct_yblock = [0i32; 64]; let mut dct_cb_block = [0i32; 64]; let mut dct_cr_block = [0i32; 64]; let mut yblock = [0u8; 64]; let mut cb_block = [0u8; 64]; let mut cr_block = [0u8; 64]; for y in (0..image.height()).step_by(8) { for x in (0..image.width()).step_by(8) { // RGB -> YCbCr copy_blocks_ycbcr(image, x, y, &mut yblock, &mut cb_block, &mut cr_block); // Level shift and fdct // Coeffs are scaled by 8 transform::fdct(&yblock, &mut dct_yblock); transform::fdct(&cb_block, &mut dct_cb_block); transform::fdct(&cr_block, &mut dct_cr_block); // Quantization for i in 0usize..64 { dct_yblock[i] = ((dct_yblock[i] / 8) 
as f32 / f32::from(self.tables[0][i])).round() as i32; dct_cb_block[i] = ((dct_cb_block[i] / 8) as f32 / f32::from(self.tables[1][i])) .round() as i32; dct_cr_block[i] = ((dct_cr_block[i] / 8) as f32 / f32::from(self.tables[1][i])) .round() as i32; } let la = &*self.luma_actable; let ld = &*self.luma_dctable; let cd = &*self.chroma_dctable; let ca = &*self.chroma_actable; y_dcprev = self.writer.write_block(&dct_yblock, y_dcprev, ld, la)?; cb_dcprev = self.writer.write_block(&dct_cb_block, cb_dcprev, cd, ca)?; cr_dcprev = self.writer.write_block(&dct_cr_block, cr_dcprev, cd, ca)?; } } Ok(()) } } impl ImageEncoder for JpegEncoder { #[track_caller] fn write_image( mut self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { self.encode(buf, width, height, color_type) } } fn build_jfif_header(m: &mut Vec, density: PixelDensity) { m.clear(); m.extend_from_slice(b"JFIF"); m.extend_from_slice(&[ 0, 0x01, 0x02, match density.unit { PixelDensityUnit::PixelAspectRatio => 0x00, PixelDensityUnit::Inches => 0x01, PixelDensityUnit::Centimeters => 0x02, }, ]); m.extend_from_slice(&density.density.0.to_be_bytes()); m.extend_from_slice(&density.density.1.to_be_bytes()); m.extend_from_slice(&[0, 0]); } fn build_frame_header( m: &mut Vec, precision: u8, width: u16, height: u16, components: &[Component], ) { m.clear(); m.push(precision); m.extend_from_slice(&height.to_be_bytes()); m.extend_from_slice(&width.to_be_bytes()); m.push(components.len() as u8); for &comp in components { let hv = (comp.h << 4) | comp.v; m.extend_from_slice(&[comp.id, hv, comp.tq]); } } fn build_scan_header(m: &mut Vec, components: &[Component]) { m.clear(); m.push(components.len() as u8); for &comp in components { let tables = (comp.dc_table << 4) | comp.ac_table; m.extend_from_slice(&[comp.id, tables]); } // spectral start and end, approx. high and low m.extend_from_slice(&[0, 63, 0]); } fn build_huffman_segment( m: &mut Vec, class: u8, destination: u8, numcodes: &[u8; 16], values: &[u8], ) { m.clear(); let tcth = (class << 4) | destination; m.push(tcth); m.extend_from_slice(numcodes); let sum: usize = numcodes.iter().map(|&x| x as usize).sum(); assert_eq!(sum, values.len()); m.extend_from_slice(values); } fn build_quantization_segment(m: &mut Vec, precision: u8, identifier: u8, qtable: &[u8; 64]) { m.clear(); let p = if precision == 8 { 0 } else { 1 }; let pqtq = (p << 4) | identifier; m.push(pqtq); for &i in &UNZIGZAG[..] { m.push(qtable[i as usize]); } } fn encode_coefficient(coefficient: i32) -> (u8, u16) { let mut magnitude = coefficient.unsigned_abs() as u16; let mut num_bits = 0u8; while magnitude > 0 { magnitude >>= 1; num_bits += 1; } let mask = (1 << num_bits as usize) - 1; let val = if coefficient < 0 { (coefficient - 1) as u16 & mask } else { coefficient as u16 & mask }; (num_bits, val) } #[inline] fn rgb_to_ycbcr(pixel: P) -> (u8, u8, u8) { use crate::traits::Primitive; use num_traits::cast::ToPrimitive; let [r, g, b] = pixel.to_rgb().0; let max: f32 = P::Subpixel::DEFAULT_MAX_VALUE.to_f32().unwrap(); let r: f32 = r.to_f32().unwrap(); let g: f32 = g.to_f32().unwrap(); let b: f32 = b.to_f32().unwrap(); // Coefficients from JPEG File Interchange Format (Version 1.02), multiplied for 255 maximum. 
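    // In normalized form: Y = 0.299 R + 0.587 G + 0.114 B,
    // Cb = -0.1687 R - 0.3313 G + 0.5 B + 128 and Cr = 0.5 R - 0.4187 G - 0.0813 B + 128;
    // the literals below are those coefficients pre-multiplied by 255.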
let y = 76.245 / max * r + 149.685 / max * g + 29.07 / max * b; let cb = -43.0185 / max * r - 84.4815 / max * g + 127.5 / max * b + 128.; let cr = 127.5 / max * r - 106.7685 / max * g - 20.7315 / max * b + 128.; (y as u8, cb as u8, cr as u8) } /// Returns the pixel at (x,y) if (x,y) is in the image, /// otherwise the closest pixel in the image #[inline] fn pixel_at_or_near(source: &I, x: u32, y: u32) -> I::Pixel { if source.in_bounds(x, y) { source.get_pixel(x, y) } else { source.get_pixel(x.min(source.width() - 1), y.min(source.height() - 1)) } } fn copy_blocks_ycbcr( source: &I, x0: u32, y0: u32, yb: &mut [u8; 64], cbb: &mut [u8; 64], crb: &mut [u8; 64], ) { for y in 0..8 { for x in 0..8 { let pixel = pixel_at_or_near(source, x + x0, y + y0); let (yc, cb, cr) = rgb_to_ycbcr(pixel); yb[(y * 8 + x) as usize] = yc; cbb[(y * 8 + x) as usize] = cb; crb[(y * 8 + x) as usize] = cr; } } } fn copy_blocks_gray(source: &I, x0: u32, y0: u32, gb: &mut [u8; 64]) { use num_traits::cast::ToPrimitive; for y in 0..8 { for x in 0..8 { let pixel = pixel_at_or_near(source, x0 + x, y0 + y); let [luma] = pixel.to_luma().0; gb[(y * 8 + x) as usize] = luma.to_u8().unwrap(); } } } #[cfg(test)] mod tests { use std::io::Cursor; #[cfg(feature = "benchmarks")] extern crate test; #[cfg(feature = "benchmarks")] use test::Bencher; use crate::error::ParameterErrorKind::DimensionMismatch; use crate::image::ImageDecoder; use crate::{ExtendedColorType, ImageEncoder, ImageError}; use super::super::JpegDecoder; use super::{ build_frame_header, build_huffman_segment, build_jfif_header, build_quantization_segment, build_scan_header, Component, JpegEncoder, PixelDensity, DCCLASS, LUMADESTINATION, STD_LUMA_DC_CODE_LENGTHS, STD_LUMA_DC_VALUES, }; fn decode(encoded: &[u8]) -> Vec { let decoder = JpegDecoder::new(Cursor::new(encoded)).expect("Could not decode image"); let mut decoded = vec![0; decoder.total_bytes() as usize]; decoder .read_image(&mut decoded) .expect("Could not decode image"); decoded } #[test] fn roundtrip_sanity_check() { // create a 1x1 8-bit image buffer containing a single red pixel let img = [255u8, 0, 0]; // encode it into a memory buffer let mut encoded_img = Vec::new(); { let encoder = JpegEncoder::new_with_quality(&mut encoded_img, 100); encoder .write_image(&img, 1, 1, ExtendedColorType::Rgb8) .expect("Could not encode image"); } // decode it from the memory buffer { let decoded = decode(&encoded_img); // note that, even with the encode quality set to 100, we do not get the same image // back. Therefore, we're going to assert that it's at least red-ish: assert_eq!(3, decoded.len()); assert!(decoded[0] > 0x80); assert!(decoded[1] < 0x80); assert!(decoded[2] < 0x80); } } #[test] fn grayscale_roundtrip_sanity_check() { // create a 2x2 8-bit image buffer containing a white diagonal let img = [255u8, 0, 0, 255]; // encode it into a memory buffer let mut encoded_img = Vec::new(); { let encoder = JpegEncoder::new_with_quality(&mut encoded_img, 100); encoder .write_image(&img[..], 2, 2, ExtendedColorType::L8) .expect("Could not encode image"); } // decode it from the memory buffer { let decoded = decode(&encoded_img); // note that, even with the encode quality set to 100, we do not get the same image // back. 
Therefore, we're going to assert that the diagonal is at least white-ish: assert_eq!(4, decoded.len()); assert!(decoded[0] > 0x80); assert!(decoded[1] < 0x80); assert!(decoded[2] < 0x80); assert!(decoded[3] > 0x80); } } #[test] fn jfif_header_density_check() { let mut buffer = Vec::new(); build_jfif_header(&mut buffer, PixelDensity::dpi(300)); assert_eq!( buffer, vec![ b'J', b'F', b'I', b'F', 0, 1, 2, // JFIF version 1.2 1, // density is in dpi 300u16.to_be_bytes()[0], 300u16.to_be_bytes()[1], 300u16.to_be_bytes()[0], 300u16.to_be_bytes()[1], 0, 0, // No thumbnail ] ); } #[test] fn test_image_too_large() { // JPEG cannot encode images larger than 65,535×65,535 // create a 65,536×1 8-bit black image buffer let img = [0; 65_536]; // Try to encode an image that is too large let mut encoded = Vec::new(); let encoder = JpegEncoder::new_with_quality(&mut encoded, 100); let result = encoder.write_image(&img, 65_536, 1, ExtendedColorType::L8); match result { Err(ImageError::Parameter(err)) => { assert_eq!(err.kind(), DimensionMismatch) } other => { panic!( "Encoding an image that is too large should return a DimensionError \ it returned {:?} instead", other ) } } } #[test] fn test_build_jfif_header() { let mut buf = vec![]; let density = PixelDensity::dpi(100); build_jfif_header(&mut buf, density); assert_eq!( buf, [0x4A, 0x46, 0x49, 0x46, 0x00, 0x01, 0x02, 0x01, 0, 100, 0, 100, 0, 0] ); } #[test] fn test_build_frame_header() { let mut buf = vec![]; let components = vec![ Component { id: 1, h: 1, v: 1, tq: 5, dc_table: 5, ac_table: 5, _dc_pred: 0, }, Component { id: 2, h: 1, v: 1, tq: 4, dc_table: 4, ac_table: 4, _dc_pred: 0, }, ]; build_frame_header(&mut buf, 5, 100, 150, &components); assert_eq!( buf, [5, 0, 150, 0, 100, 2, 1, 1 << 4 | 1, 5, 2, 1 << 4 | 1, 4] ); } #[test] fn test_build_scan_header() { let mut buf = vec![]; let components = vec![ Component { id: 1, h: 1, v: 1, tq: 5, dc_table: 5, ac_table: 5, _dc_pred: 0, }, Component { id: 2, h: 1, v: 1, tq: 4, dc_table: 4, ac_table: 4, _dc_pred: 0, }, ]; build_scan_header(&mut buf, &components); assert_eq!(buf, [2, 1, 5 << 4 | 5, 2, 4 << 4 | 4, 0, 63, 0]); } #[test] fn test_build_huffman_segment() { let mut buf = vec![]; build_huffman_segment( &mut buf, DCCLASS, LUMADESTINATION, &STD_LUMA_DC_CODE_LENGTHS, &STD_LUMA_DC_VALUES, ); assert_eq!( buf, vec![ 0, 0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ] ); } #[test] fn test_build_quantization_segment() { let mut buf = vec![]; let qtable = [0u8; 64]; build_quantization_segment(&mut buf, 8, 1, &qtable); let mut expected = vec![]; expected.push(1); expected.extend_from_slice(&[0; 64]); assert_eq!(buf, expected) } #[cfg(feature = "benchmarks")] #[bench] fn bench_jpeg_encoder_new(b: &mut Bencher) { b.iter(|| { let mut y = vec![]; let _x = JpegEncoder::new(&mut y); }) } } image-0.25.5/src/codecs/jpeg/entropy.rs000064400000000000000000000030401046102023000160160ustar 00000000000000/// Given an array containing the number of codes of each code length, /// this function generates the huffman codes lengths and their respective /// code lengths as specified by the JPEG spec. 
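/// Worked example (added for illustration): with two codes of length 2 and one code of
/// length 3 (all other counts zero), the canonical codes come out as `00`, `01` and `100`:
/// each code is the previous one plus one, left-shifted whenever the code length grows.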
const fn derive_codes_and_sizes(bits: &[u8; 16]) -> ([u8; 256], [u16; 256]) { let mut huffsize = [0u8; 256]; let mut huffcode = [0u16; 256]; let mut k = 0; // Annex C.2 // Figure C.1 // Generate table of individual code lengths let mut i = 0; while i < 16 { let mut j = 0; while j < bits[i as usize] { huffsize[k] = i + 1; k += 1; j += 1; } i += 1; } huffsize[k] = 0; // Annex C.2 // Figure C.2 // Generate table of huffman codes k = 0; let mut code = 0u16; let mut size = huffsize[0]; while huffsize[k] != 0 { huffcode[k] = code; code += 1; k += 1; if huffsize[k] == size { continue; } // FIXME there is something wrong with this code let diff = huffsize[k].wrapping_sub(size); code = if diff < 16 { code << diff as usize } else { 0 }; size = size.wrapping_add(diff); } (huffsize, huffcode) } pub(crate) const fn build_huff_lut_const(bits: &[u8; 16], huffval: &[u8]) -> [(u8, u16); 256] { let mut lut = [(17u8, 0u16); 256]; let (huffsize, huffcode) = derive_codes_and_sizes(bits); let mut i = 0; while i < huffval.len() { lut[huffval[i] as usize] = (huffsize[i], huffcode[i]); i += 1; } lut } image-0.25.5/src/codecs/jpeg/mod.rs000064400000000000000000000007171046102023000151050ustar 00000000000000//! Decoding and Encoding of JPEG Images //! //! JPEG (Joint Photographic Experts Group) is an image format that supports lossy compression. //! This module implements the Baseline JPEG standard. //! //! # Related Links //! * - The JPEG specification //! pub use self::decoder::JpegDecoder; pub use self::encoder::{JpegEncoder, PixelDensity, PixelDensityUnit}; mod decoder; mod encoder; mod entropy; mod transform; image-0.25.5/src/codecs/jpeg/transform.rs000064400000000000000000000173311046102023000163410ustar 00000000000000/* fdct is a Rust translation of jfdctint.c from the Independent JPEG Group's libjpeg version 9a obtained from http://www.ijg.org/files/jpegsr9a.zip It comes with the following conditions of distribution and use: In plain English: 1. We don't promise that this software works. (But if you find any bugs, please let us know!) 2. You can use this software for whatever you want. You don't have to pay us. 3. You may not pretend that you wrote this software. If you use it in a program, you must acknowledge somewhere in your documentation that you've used the IJG code. In legalese: The authors make NO WARRANTY or representation, either express or implied, with respect to this software, its quality, accuracy, merchantability, or fitness for a particular purpose. This software is provided "AS IS", and you, its user, assume the entire risk as to its quality and accuracy. This software is copyright (C) 1991-2014, Thomas G. Lane, Guido Vollbeding. All Rights Reserved except as specified below. Permission is hereby granted to use, copy, modify, and distribute this software (or portions thereof) for any purpose, without fee, subject to these conditions: (1) If any part of the source code for this software is distributed, then this README file must be included, with this copyright and no-warranty notice unaltered; and any additions, deletions, or changes to the original files must be clearly indicated in accompanying documentation. (2) If only executable code is distributed, then the accompanying documentation must state that "this software is based in part on the work of the Independent JPEG Group". (3) Permission for use of this software is granted only if the user accepts full responsibility for any undesirable consequences; the authors accept NO LIABILITY for damages of any kind. 
These conditions apply to any software derived from or based on the IJG code, not just to the unmodified library. If you use our work, you ought to acknowledge us. Permission is NOT granted for the use of any IJG author's name or company name in advertising or publicity relating to this software or products derived from it. This software may be referred to only as "the Independent JPEG Group's software". We specifically permit and encourage the use of this software as the basis of commercial products, provided that all warranty or liability claims are assumed by the product vendor. */ static CONST_BITS: i32 = 13; static PASS1_BITS: i32 = 2; static FIX_0_298631336: i32 = 2446; static FIX_0_390180644: i32 = 3196; static FIX_0_541196100: i32 = 4433; static FIX_0_765366865: i32 = 6270; static FIX_0_899976223: i32 = 7373; static FIX_1_175875602: i32 = 9633; static FIX_1_501321110: i32 = 12_299; static FIX_1_847759065: i32 = 15_137; static FIX_1_961570560: i32 = 16_069; static FIX_2_053119869: i32 = 16_819; static FIX_2_562915447: i32 = 20_995; static FIX_3_072711026: i32 = 25_172; pub(crate) fn fdct(samples: &[u8; 64], coeffs: &mut [i32; 64]) { // Pass 1: process rows. // Results are scaled by sqrt(8) compared to a true DCT // furthermore we scale the results by 2**PASS1_BITS for y in 0usize..8 { let y0 = y * 8; // Even part let t0 = i32::from(samples[y0]) + i32::from(samples[y0 + 7]); let t1 = i32::from(samples[y0 + 1]) + i32::from(samples[y0 + 6]); let t2 = i32::from(samples[y0 + 2]) + i32::from(samples[y0 + 5]); let t3 = i32::from(samples[y0 + 3]) + i32::from(samples[y0 + 4]); let t10 = t0 + t3; let t12 = t0 - t3; let t11 = t1 + t2; let t13 = t1 - t2; let t0 = i32::from(samples[y0]) - i32::from(samples[y0 + 7]); let t1 = i32::from(samples[y0 + 1]) - i32::from(samples[y0 + 6]); let t2 = i32::from(samples[y0 + 2]) - i32::from(samples[y0 + 5]); let t3 = i32::from(samples[y0 + 3]) - i32::from(samples[y0 + 4]); // Apply unsigned -> signed conversion coeffs[y0] = (t10 + t11 - 8 * 128) << PASS1_BITS as usize; coeffs[y0 + 4] = (t10 - t11) << PASS1_BITS as usize; let mut z1 = (t12 + t13) * FIX_0_541196100; // Add fudge factor here for final descale z1 += 1 << (CONST_BITS - PASS1_BITS - 1) as usize; coeffs[y0 + 2] = (z1 + t12 * FIX_0_765366865) >> (CONST_BITS - PASS1_BITS) as usize; coeffs[y0 + 6] = (z1 - t13 * FIX_1_847759065) >> (CONST_BITS - PASS1_BITS) as usize; // Odd part let t12 = t0 + t2; let t13 = t1 + t3; let mut z1 = (t12 + t13) * FIX_1_175875602; // Add fudge factor here for final descale z1 += 1 << (CONST_BITS - PASS1_BITS - 1) as usize; let mut t12 = t12 * (-FIX_0_390180644); let mut t13 = t13 * (-FIX_1_961570560); t12 += z1; t13 += z1; let z1 = (t0 + t3) * (-FIX_0_899976223); let mut t0 = t0 * FIX_1_501321110; let mut t3 = t3 * FIX_0_298631336; t0 += z1 + t12; t3 += z1 + t13; let z1 = (t1 + t2) * (-FIX_2_562915447); let mut t1 = t1 * FIX_3_072711026; let mut t2 = t2 * FIX_2_053119869; t1 += z1 + t13; t2 += z1 + t12; coeffs[y0 + 1] = t0 >> (CONST_BITS - PASS1_BITS) as usize; coeffs[y0 + 3] = t1 >> (CONST_BITS - PASS1_BITS) as usize; coeffs[y0 + 5] = t2 >> (CONST_BITS - PASS1_BITS) as usize; coeffs[y0 + 7] = t3 >> (CONST_BITS - PASS1_BITS) as usize; } // Pass 2: process columns // We remove the PASS1_BITS scaling but leave the results scaled up an // overall factor of 8 for x in (0usize..8).rev() { // Even part let t0 = coeffs[x] + coeffs[x + 8 * 7]; let t1 = coeffs[x + 8] + coeffs[x + 8 * 6]; let t2 = coeffs[x + 8 * 2] + coeffs[x + 8 * 5]; let t3 = coeffs[x + 8 * 3] + coeffs[x + 8 * 4]; 
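// Note (illustrative): the FIX_* constants above are the IJG trigonometric constants
// scaled by 2^CONST_BITS, e.g. FIX_0_541196100 = round(0.541196100 * 8192) = 4433.
// The "fudge factor" added before each right shift in this function is 1 << (shift - 1),
// which turns the truncating `>>` into round-to-nearest; with CONST_BITS = 13 that means
// (x + 4096) >> 13 computes round(x / 8192) for non-negative x.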
// Add fudge factor here for final descale let t10 = t0 + t3 + (1 << (PASS1_BITS - 1) as usize); let t12 = t0 - t3; let t11 = t1 + t2; let t13 = t1 - t2; let t0 = coeffs[x] - coeffs[x + 8 * 7]; let t1 = coeffs[x + 8] - coeffs[x + 8 * 6]; let t2 = coeffs[x + 8 * 2] - coeffs[x + 8 * 5]; let t3 = coeffs[x + 8 * 3] - coeffs[x + 8 * 4]; coeffs[x] = (t10 + t11) >> PASS1_BITS as usize; coeffs[x + 8 * 4] = (t10 - t11) >> PASS1_BITS as usize; let mut z1 = (t12 + t13) * FIX_0_541196100; // Add fudge factor here for final descale z1 += 1 << (CONST_BITS + PASS1_BITS - 1) as usize; coeffs[x + 8 * 2] = (z1 + t12 * FIX_0_765366865) >> (CONST_BITS + PASS1_BITS) as usize; coeffs[x + 8 * 6] = (z1 - t13 * FIX_1_847759065) >> (CONST_BITS + PASS1_BITS) as usize; // Odd part let t12 = t0 + t2; let t13 = t1 + t3; let mut z1 = (t12 + t13) * FIX_1_175875602; // Add fudge factor here for final descale z1 += 1 << (CONST_BITS - PASS1_BITS - 1) as usize; let mut t12 = t12 * (-FIX_0_390180644); let mut t13 = t13 * (-FIX_1_961570560); t12 += z1; t13 += z1; let z1 = (t0 + t3) * (-FIX_0_899976223); let mut t0 = t0 * FIX_1_501321110; let mut t3 = t3 * FIX_0_298631336; t0 += z1 + t12; t3 += z1 + t13; let z1 = (t1 + t2) * (-FIX_2_562915447); let mut t1 = t1 * FIX_3_072711026; let mut t2 = t2 * FIX_2_053119869; t1 += z1 + t13; t2 += z1 + t12; coeffs[x + 8] = t0 >> (CONST_BITS + PASS1_BITS) as usize; coeffs[x + 8 * 3] = t1 >> (CONST_BITS + PASS1_BITS) as usize; coeffs[x + 8 * 5] = t2 >> (CONST_BITS + PASS1_BITS) as usize; coeffs[x + 8 * 7] = t3 >> (CONST_BITS + PASS1_BITS) as usize; } } image-0.25.5/src/codecs/openexr.rs000064400000000000000000000525401046102023000150620ustar 00000000000000//! Decoding of OpenEXR (.exr) Images //! //! OpenEXR is an image format that is widely used, especially in VFX, //! because it supports lossless and lossy compression for float data. //! //! This decoder only supports RGB and RGBA images. //! If an image does not contain alpha information, //! it is defaulted to `1.0` (no transparency). //! //! # Related Links //! * - The OpenEXR reference. //! //! //! Current limitations (July 2021): //! - only pixel type `Rgba32F` and `Rgba16F` are supported //! - only non-deep rgb/rgba files supported, no conversion from/to YCbCr or similar //! - only the first non-deep rgb layer is used //! - only the largest mip map level is used //! - pixels outside display window are lost //! - meta data is lost //! - dwaa/dwab compressed images not supported yet by the exr library //! - (chroma) subsampling not supported yet by the exr library use exr::prelude::*; use crate::error::{DecodingError, EncodingError, ImageFormatHint}; use crate::{ ColorType, ExtendedColorType, ImageDecoder, ImageEncoder, ImageError, ImageFormat, ImageResult, }; use std::io::{BufRead, Seek, Write}; /// An OpenEXR decoder. Immediately reads the meta data from the file. #[derive(Debug)] pub struct OpenExrDecoder { exr_reader: exr::block::reader::Reader, // select a header that is rgb and not deep header_index: usize, // decode either rgb or rgba. // can be specified to include or discard alpha channels. // if none, the alpha channel will only be allocated where the file contains data for it. alpha_preference: Option, alpha_present_in_file: bool, } impl OpenExrDecoder { /// Create a decoder. Consumes the first few bytes of the source to extract image dimensions. /// Assumes the reader is buffered. In most cases, /// you should wrap your reader in a `BufReader` for best performance. /// Loads an alpha channel if the file has alpha samples. 
/// Use `with_alpha_preference` if you want to load or not load alpha unconditionally. pub fn new(source: R) -> ImageResult { Self::with_alpha_preference(source, None) } /// Create a decoder. Consumes the first few bytes of the source to extract image dimensions. /// Assumes the reader is buffered. In most cases, /// you should wrap your reader in a `BufReader` for best performance. /// If alpha preference is specified, an alpha channel will /// always be present or always be not present in the returned image. /// If alpha preference is none, the alpha channel will only be returned if it is found in the file. pub fn with_alpha_preference(source: R, alpha_preference: Option) -> ImageResult { // read meta data, then wait for further instructions, keeping the file open and ready let exr_reader = exr::block::read(source, false).map_err(to_image_err)?; let header_index = exr_reader .headers() .iter() .position(|header| { // check if r/g/b exists in the channels let has_rgb = ["R", "G", "B"] .iter() .all(|&required| // alpha will be optional header.channels.find_index_of_channel(&Text::from(required)).is_some()); // we currently dont support deep images, or images with other color spaces than rgb !header.deep && has_rgb }) .ok_or_else(|| { ImageError::Decoding(DecodingError::new( ImageFormatHint::Exact(ImageFormat::OpenExr), "image does not contain non-deep rgb channels", )) })?; let has_alpha = exr_reader.headers()[header_index] .channels .find_index_of_channel(&Text::from("A")) .is_some(); Ok(Self { alpha_preference, exr_reader, header_index, alpha_present_in_file: has_alpha, }) } // does not leak exrs-specific meta data into public api, just does it for this module fn selected_exr_header(&self) -> &exr::meta::header::Header { &self.exr_reader.meta_data().headers[self.header_index] } } impl ImageDecoder for OpenExrDecoder { fn dimensions(&self) -> (u32, u32) { let size = self .selected_exr_header() .shared_attributes .display_window .size; (size.width() as u32, size.height() as u32) } fn color_type(&self) -> ColorType { let returns_alpha = self.alpha_preference.unwrap_or(self.alpha_present_in_file); if returns_alpha { ColorType::Rgba32F } else { ColorType::Rgb32F } } fn original_color_type(&self) -> ExtendedColorType { if self.alpha_present_in_file { ExtendedColorType::Rgba32F } else { ExtendedColorType::Rgb32F } } // reads with or without alpha, depending on `self.alpha_preference` and `self.alpha_present_in_file` fn read_image(self, unaligned_bytes: &mut [u8]) -> ImageResult<()> { let _blocks_in_header = self.selected_exr_header().chunk_count as u64; let channel_count = self.color_type().channel_count() as usize; let display_window = self.selected_exr_header().shared_attributes.display_window; let data_window_offset = self.selected_exr_header().own_attributes.layer_position - display_window.position; { // check whether the buffer is large enough for the dimensions of the file let (width, height) = self.dimensions(); let bytes_per_pixel = self.color_type().bytes_per_pixel() as usize; let expected_byte_count = (width as usize) .checked_mul(height as usize) .and_then(|size| size.checked_mul(bytes_per_pixel)); // if the width and height does not match the length of the bytes, the arguments are invalid let has_invalid_size_or_overflowed = expected_byte_count .map(|expected_byte_count| unaligned_bytes.len() != expected_byte_count) // otherwise, size calculation overflowed, is bigger than memory, // therefore data is too small, so it is invalid. 
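// Worked example (illustrative): a 1920x1080 image decoded as `Rgba32F`
// (4 channels * 4 bytes per sample) requires 1920 * 1080 * 16 = 33_177_600 bytes,
// so any other buffer length trips the assertion below.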
.unwrap_or(true); assert!( !has_invalid_size_or_overflowed, "byte buffer not large enough for the specified dimensions and f32 pixels" ); } let result = read() .no_deep_data() .largest_resolution_level() .rgba_channels( move |_size, _channels| vec![0_f32; display_window.size.area() * channel_count], move |buffer, index_in_data_window, (r, g, b, a_or_1): (f32, f32, f32, f32)| { let index_in_display_window = index_in_data_window.to_i32() + data_window_offset; // only keep pixels inside the data window // TODO filter chunks based on this if index_in_display_window.x() >= 0 && index_in_display_window.y() >= 0 && index_in_display_window.x() < display_window.size.width() as i32 && index_in_display_window.y() < display_window.size.height() as i32 { let index_in_display_window = index_in_display_window.to_usize("index bug").unwrap(); let first_f32_index = index_in_display_window.flat_index_for_size(display_window.size); buffer[first_f32_index * channel_count ..(first_f32_index + 1) * channel_count] .copy_from_slice(&[r, g, b, a_or_1][0..channel_count]); // TODO white point chromaticities + srgb/linear conversion? } }, ) .first_valid_layer() // TODO select exact layer by self.header_index? .all_attributes() .from_chunks(self.exr_reader) .map_err(to_image_err)?; // TODO this copy is strictly not necessary, but the exr api is a little too simple for reading into a borrowed target slice // this cast is safe and works with any alignment, as bytes are copied, and not f32 values. // note: buffer slice length is checked in the beginning of this function and will be correct at this point unaligned_bytes.copy_from_slice(bytemuck::cast_slice( result.layer_data.channel_data.pixels.as_slice(), )); Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } /// Write a raw byte buffer of pixels, /// returning an Error if it has an invalid length. /// /// Assumes the writer is buffered. In most cases, /// you should wrap your writer in a `BufWriter` for best performance. // private. access via `OpenExrEncoder` fn write_buffer( mut buffered_write: impl Write + Seek, unaligned_bytes: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let width = width as usize; let height = height as usize; let bytes_per_pixel = color_type.bits_per_pixel() as usize / 8; match color_type { ExtendedColorType::Rgb32F => { Image // TODO compression method zip?? ::from_channels( (width, height), SpecificChannels::rgb(|pixel: Vec2| { let pixel_index = pixel.flat_index_for_size(Vec2(width, height)); let start_byte = pixel_index * bytes_per_pixel; let [r, g, b]: [f32; 3] = bytemuck::pod_read_unaligned( &unaligned_bytes[start_byte..start_byte + bytes_per_pixel], ); (r, g, b) }), ) .write() // .on_progress(|progress| todo!()) .to_buffered(&mut buffered_write) .map_err(to_image_err)?; } ExtendedColorType::Rgba32F => { Image // TODO compression method zip?? 
::from_channels( (width, height), SpecificChannels::rgba(|pixel: Vec2| { let pixel_index = pixel.flat_index_for_size(Vec2(width, height)); let start_byte = pixel_index * bytes_per_pixel; let [r, g, b, a]: [f32; 4] = bytemuck::pod_read_unaligned( &unaligned_bytes[start_byte..start_byte + bytes_per_pixel], ); (r, g, b, a) }), ) .write() // .on_progress(|progress| todo!()) .to_buffered(&mut buffered_write) .map_err(to_image_err)?; } // TODO other color types and channel types unsupported_color_type => { return Err(ImageError::Encoding(EncodingError::new( ImageFormatHint::Exact(ImageFormat::OpenExr), format!("writing color type {unsupported_color_type:?} not yet supported"), ))) } } Ok(()) } // TODO is this struct and trait actually used anywhere? /// A thin wrapper that implements `ImageEncoder` for OpenEXR images. Will behave like `image::codecs::openexr::write_buffer`. #[derive(Debug)] pub struct OpenExrEncoder(W); impl OpenExrEncoder { /// Create an `ImageEncoder`. Does not write anything yet. Writing later will behave like `image::codecs::openexr::write_buffer`. // use constructor, not public field, for future backwards-compatibility pub fn new(write: W) -> Self { Self(write) } } impl ImageEncoder for OpenExrEncoder where W: Write + Seek, { /// Writes the complete image. /// /// Assumes the writer is buffered. In most cases, you should wrap your writer in a `BufWriter` /// for best performance. #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); write_buffer(self.0, buf, width, height, color_type) } } fn to_image_err(exr_error: Error) -> ImageError { ImageError::Decoding(DecodingError::new( ImageFormatHint::Exact(ImageFormat::OpenExr), exr_error.to_string(), )) } #[cfg(test)] mod test { use super::*; use std::fs::File; use std::io::{BufReader, Cursor}; use std::path::{Path, PathBuf}; use crate::buffer_::{Rgb32FImage, Rgba32FImage}; use crate::error::{LimitError, LimitErrorKind}; use crate::{DynamicImage, ImageBuffer, Rgb, Rgba}; const BASE_PATH: &[&str] = &[".", "tests", "images", "exr"]; /// Write an `Rgb32FImage`. /// Assumes the writer is buffered. In most cases, /// you should wrap your writer in a `BufWriter` for best performance. fn write_rgb_image(write: impl Write + Seek, image: &Rgb32FImage) -> ImageResult<()> { write_buffer( write, bytemuck::cast_slice(image.as_raw().as_slice()), image.width(), image.height(), ExtendedColorType::Rgb32F, ) } /// Write an `Rgba32FImage`. /// Assumes the writer is buffered. In most cases, /// you should wrap your writer in a `BufWriter` for best performance. fn write_rgba_image(write: impl Write + Seek, image: &Rgba32FImage) -> ImageResult<()> { write_buffer( write, bytemuck::cast_slice(image.as_raw().as_slice()), image.width(), image.height(), ExtendedColorType::Rgba32F, ) } /// Read the file from the specified path into an `Rgba32FImage`. fn read_as_rgba_image_from_file(path: impl AsRef) -> ImageResult { read_as_rgba_image(BufReader::new(File::open(path)?)) } /// Read the file from the specified path into an `Rgb32FImage`. fn read_as_rgb_image_from_file(path: impl AsRef) -> ImageResult { read_as_rgb_image(BufReader::new(File::open(path)?)) } /// Read the file from the specified path into an `Rgb32FImage`. 
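    // A minimal usage sketch (the helper name below is ours and nothing in this test
    // module calls it): decoding an EXR through the public `ImageDecoder` API into a
    // `DynamicImage`, rather than via the raw-buffer helpers used by these tests.
    #[allow(dead_code)]
    fn decode_exr_to_dynamic_image(path: impl AsRef<Path>) -> ImageResult<DynamicImage> {
        let file = BufReader::new(File::open(path)?);
        let decoder = OpenExrDecoder::new(file)?;
        DynamicImage::from_decoder(decoder)
    }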
fn read_as_rgb_image(read: impl BufRead + Seek) -> ImageResult { let decoder = OpenExrDecoder::with_alpha_preference(read, Some(false))?; let (width, height) = decoder.dimensions(); let buffer: Vec = crate::image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(width, height, buffer) // this should be the only reason for the "from raw" call to fail, // even though such a large allocation would probably cause an error much earlier .ok_or_else(|| { ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory)) }) } /// Read the file from the specified path into an `Rgba32FImage`. fn read_as_rgba_image(read: impl BufRead + Seek) -> ImageResult { let decoder = OpenExrDecoder::with_alpha_preference(read, Some(true))?; let (width, height) = decoder.dimensions(); let buffer: Vec = crate::image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(width, height, buffer) // this should be the only reason for the "from raw" call to fail, // even though such a large allocation would probably cause an error much earlier .ok_or_else(|| { ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory)) }) } #[test] fn compare_exr_hdr() { if cfg!(not(feature = "hdr")) { eprintln!("warning: to run all the openexr tests, activate the hdr feature flag"); } #[cfg(feature = "hdr")] { use crate::codecs::hdr::HdrDecoder; let folder = BASE_PATH.iter().collect::(); let reference_path = folder.clone().join("overexposed gradient.hdr"); let exr_path = folder .clone() .join("overexposed gradient - data window equals display window.exr"); let hdr_decoder = HdrDecoder::new(BufReader::new(File::open(reference_path).unwrap())).unwrap(); let hdr: Rgb32FImage = match DynamicImage::from_decoder(hdr_decoder).unwrap() { DynamicImage::ImageRgb32F(image) => image, _ => panic!("expected rgb32f image"), }; let exr_pixels: Rgb32FImage = read_as_rgb_image_from_file(exr_path).unwrap(); assert_eq!(exr_pixels.dimensions(), hdr.dimensions()); for (expected, found) in hdr.pixels().zip(exr_pixels.pixels()) { for (expected, found) in expected.0.iter().zip(found.0.iter()) { // the large tolerance seems to be caused by // the RGBE u8x4 pixel quantization of the hdr image format assert!( (expected - found).abs() < 0.1, "expected {}, found {}", expected, found ); } } } } #[test] fn roundtrip_rgba() { let mut next_random = vec![1.0, 0.0, -1.0, -3.15, 27.0, 11.0, 31.0] .into_iter() .cycle(); let mut next_random = move || next_random.next().unwrap(); let generated_image: Rgba32FImage = ImageBuffer::from_fn(9, 31, |_x, _y| { Rgba([next_random(), next_random(), next_random(), next_random()]) }); let mut bytes = vec![]; write_rgba_image(Cursor::new(&mut bytes), &generated_image).unwrap(); let decoded_image = read_as_rgba_image(Cursor::new(bytes)).unwrap(); debug_assert_eq!(generated_image, decoded_image); } #[test] fn roundtrip_rgb() { let mut next_random = vec![1.0, 0.0, -1.0, -3.15, 27.0, 11.0, 31.0] .into_iter() .cycle(); let mut next_random = move || next_random.next().unwrap(); let generated_image: Rgb32FImage = ImageBuffer::from_fn(9, 31, |_x, _y| { Rgb([next_random(), next_random(), next_random()]) }); let mut bytes = vec![]; write_rgb_image(Cursor::new(&mut bytes), &generated_image).unwrap(); let decoded_image = read_as_rgb_image(Cursor::new(bytes)).unwrap(); debug_assert_eq!(generated_image, decoded_image); } #[test] fn compare_rgba_rgb() { let exr_path = BASE_PATH .iter() .collect::() .join("overexposed gradient - data window equals display window.exr"); let rgb: Rgb32FImage = 
read_as_rgb_image_from_file(&exr_path).unwrap(); let rgba: Rgba32FImage = read_as_rgba_image_from_file(&exr_path).unwrap(); assert_eq!(rgba.dimensions(), rgb.dimensions()); for (Rgb(rgb), Rgba(rgba)) in rgb.pixels().zip(rgba.pixels()) { assert_eq!(rgb, &rgba[..3]); } } #[test] fn compare_cropped() { // like in photoshop, exr images may have layers placed anywhere in a canvas. // we don't want to load the pixels from the layer, but we want to load the pixels from the canvas. // a layer might be smaller than the canvas, in that case the canvas should be transparent black // where no layer was covering it. a layer might also be larger than the canvas, // these pixels should be discarded. // // in this test we want to make sure that an // auto-cropped image will be reproduced to the original. let exr_path = BASE_PATH.iter().collect::(); let original = exr_path.clone().join("cropping - uncropped original.exr"); let cropped = exr_path .clone() .join("cropping - data window differs display window.exr"); // smoke-check that the exr files are actually not the same { let original_exr = read_first_flat_layer_from_file(&original).unwrap(); let cropped_exr = read_first_flat_layer_from_file(&cropped).unwrap(); assert_eq!( original_exr.attributes.display_window, cropped_exr.attributes.display_window ); assert_ne!( original_exr.layer_data.attributes.layer_position, cropped_exr.layer_data.attributes.layer_position ); assert_ne!(original_exr.layer_data.size, cropped_exr.layer_data.size); } // check that they result in the same image let original: Rgba32FImage = read_as_rgba_image_from_file(&original).unwrap(); let cropped: Rgba32FImage = read_as_rgba_image_from_file(&cropped).unwrap(); assert_eq!(original.dimensions(), cropped.dimensions()); // the following is not a simple assert_eq, as in case of an error, // the whole image would be printed to the console, which takes forever assert!(original.pixels().zip(cropped.pixels()).all(|(a, b)| a == b)); } } image-0.25.5/src/codecs/pcx.rs000064400000000000000000000115671046102023000142000ustar 00000000000000//! Decoding and Encoding of PCX Images //! //! PCX (PiCture eXchange) Format is an obsolete image format from the 1980s. //! //! # Related Links //! * - The PCX format on Wikipedia extern crate pcx; use std::io::{self, BufRead, Cursor, Read, Seek}; use std::iter; use std::marker::PhantomData; use std::mem; use crate::color::{ColorType, ExtendedColorType}; use crate::error::{ImageError, ImageResult}; use crate::image::ImageDecoder; /// Decoder for PCX images. pub struct PCXDecoder where R: BufRead + Seek, { dimensions: (u32, u32), inner: pcx::Reader, } impl PCXDecoder where R: BufRead + Seek, { /// Create a new `PCXDecoder`. 
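    // Note (descriptive, added for clarity): regardless of the file's bit depth or
    // palette, this decoder reports `ColorType::Rgb8` from `color_type()` and expands
    // paletted data to RGB inside `read_image`; the on-disk layout is still exposed
    // through `original_color_type()`.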
pub fn new(r: R) -> Result, ImageError> { let inner = pcx::Reader::new(r).map_err(ImageError::from_pcx_decode)?; let dimensions = (u32::from(inner.width()), u32::from(inner.height())); Ok(PCXDecoder { dimensions, inner }) } } impl ImageError { fn from_pcx_decode(err: io::Error) -> ImageError { ImageError::IoError(err) } } /// Wrapper struct around a `Cursor>` #[allow(dead_code)] #[deprecated] pub struct PCXReader(Cursor>, PhantomData); #[allow(deprecated)] impl Read for PCXReader { fn read(&mut self, buf: &mut [u8]) -> io::Result { self.0.read(buf) } fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { if self.0.position() == 0 && buf.is_empty() { mem::swap(buf, self.0.get_mut()); Ok(buf.len()) } else { self.0.read_to_end(buf) } } } impl ImageDecoder for PCXDecoder { fn dimensions(&self) -> (u32, u32) { self.dimensions } fn color_type(&self) -> ColorType { ColorType::Rgb8 } fn original_color_type(&self) -> ExtendedColorType { if self.inner.is_paletted() { return ExtendedColorType::Unknown(self.inner.header.bit_depth); } match ( self.inner.header.number_of_color_planes, self.inner.header.bit_depth, ) { (1, 1) => ExtendedColorType::L1, (1, 2) => ExtendedColorType::L2, (1, 4) => ExtendedColorType::L4, (1, 8) => ExtendedColorType::L8, (3, 1) => ExtendedColorType::Rgb1, (3, 2) => ExtendedColorType::Rgb2, (3, 4) => ExtendedColorType::Rgb4, (3, 8) => ExtendedColorType::Rgb8, (4, 1) => ExtendedColorType::Rgba1, (4, 2) => ExtendedColorType::Rgba2, (4, 4) => ExtendedColorType::Rgba4, (4, 8) => ExtendedColorType::Rgba8, (_, _) => unreachable!(), } } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); let height = self.inner.height() as usize; let width = self.inner.width() as usize; match self.inner.palette_length() { // No palette to interpret, so we can just write directly to buf None => { for i in 0..height { let offset = i * 3 * width; self.inner .next_row_rgb(&mut buf[offset..offset + (width * 3)]) .map_err(ImageError::from_pcx_decode)?; } } // We need to convert from the palette colours to RGB values inline, // but the pcx crate can't give us the palette first. Work around it // by taking the paletted image into a buffer, then converting it to // RGB8 after. Some(palette_length) => { let mut pal_buf: Vec = iter::repeat(0).take(height * width).collect(); for i in 0..height { let offset = i * width; self.inner .next_row_paletted(&mut pal_buf[offset..offset + width]) .map_err(ImageError::from_pcx_decode)?; } let mut palette: Vec = iter::repeat(0).take(3 * palette_length as usize).collect(); self.inner .read_palette(&mut palette[..]) .map_err(ImageError::from_pcx_decode)?; for i in 0..height { for j in 0..width { let pixel = pal_buf[i * width + j] as usize; let offset = i * width * 3 + j * 3; buf[offset] = palette[pixel * 3]; buf[offset + 1] = palette[pixel * 3 + 1]; buf[offset + 2] = palette[pixel * 3 + 2]; } } } } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } image-0.25.5/src/codecs/png.rs000064400000000000000000000724651046102023000141760ustar 00000000000000//! Decoding and Encoding of PNG Images //! //! PNG (Portable Network Graphics) is an image format that supports lossless compression. //! //! # Related Links //! * - The PNG Specification //! 
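//!
//! A minimal decoding sketch (illustrative only; the file names and error handling
//! are placeholders, not taken from this crate's tests):
//!
//! ```no_run
//! use std::{fs::File, io::BufReader};
//! use image::codecs::png::PngDecoder;
//! use image::DynamicImage;
//!
//! # fn main() -> image::ImageResult<()> {
//! let decoder = PngDecoder::new(BufReader::new(File::open("example.png")?))?;
//! let _image = DynamicImage::from_decoder(decoder)?;
//! # Ok(()) }
//! ```
//!
//! Animated PNGs can be iterated frame by frame through [`PngDecoder::apng`]:
//!
//! ```no_run
//! use std::{fs::File, io::BufReader};
//! use image::codecs::png::PngDecoder;
//! use image::AnimationDecoder;
//!
//! # fn main() -> image::ImageResult<()> {
//! let decoder = PngDecoder::new(BufReader::new(File::open("animated.png")?))?;
//! for frame in decoder.apng()?.into_frames() {
//!     let frame = frame?;
//!     let _delay = frame.delay();
//! }
//! # Ok(()) }
//! ```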
use std::fmt; use std::io::{BufRead, Seek, Write}; use png::{BlendOp, DisposeOp}; use crate::animation::{Delay, Frame, Frames, Ratio}; use crate::color::{Blend, ColorType, ExtendedColorType}; use crate::error::{ DecodingError, EncodingError, ImageError, ImageResult, LimitError, LimitErrorKind, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{AnimationDecoder, ImageDecoder, ImageEncoder, ImageFormat}; use crate::{DynamicImage, GenericImage, ImageBuffer, Luma, LumaA, Rgb, Rgba, RgbaImage}; use crate::{GenericImageView, Limits}; // http://www.w3.org/TR/PNG-Structure.html // The first eight bytes of a PNG file always contain the following (decimal) values: pub(crate) const PNG_SIGNATURE: [u8; 8] = [137, 80, 78, 71, 13, 10, 26, 10]; /// PNG decoder pub struct PngDecoder { color_type: ColorType, reader: png::Reader, limits: Limits, } impl PngDecoder { /// Creates a new decoder that decodes from the stream ```r``` pub fn new(r: R) -> ImageResult> { Self::with_limits(r, Limits::no_limits()) } /// Creates a new decoder that decodes from the stream ```r``` with the given limits. pub fn with_limits(r: R, limits: Limits) -> ImageResult> { limits.check_support(&crate::LimitSupport::default())?; let max_bytes = usize::try_from(limits.max_alloc.unwrap_or(u64::MAX)).unwrap_or(usize::MAX); let mut decoder = png::Decoder::new_with_limits(r, png::Limits { bytes: max_bytes }); decoder.set_ignore_text_chunk(true); let info = decoder.read_header_info().map_err(ImageError::from_png)?; limits.check_dimensions(info.width, info.height)?; // By default the PNG decoder will scale 16 bpc to 8 bpc, so custom // transformations must be set. EXPAND preserves the default behavior // expanding bpc < 8 to 8 bpc. decoder.set_transformations(png::Transformations::EXPAND); let reader = decoder.read_info().map_err(ImageError::from_png)?; let (color_type, bits) = reader.output_color_type(); let color_type = match (color_type, bits) { (png::ColorType::Grayscale, png::BitDepth::Eight) => ColorType::L8, (png::ColorType::Grayscale, png::BitDepth::Sixteen) => ColorType::L16, (png::ColorType::GrayscaleAlpha, png::BitDepth::Eight) => ColorType::La8, (png::ColorType::GrayscaleAlpha, png::BitDepth::Sixteen) => ColorType::La16, (png::ColorType::Rgb, png::BitDepth::Eight) => ColorType::Rgb8, (png::ColorType::Rgb, png::BitDepth::Sixteen) => ColorType::Rgb16, (png::ColorType::Rgba, png::BitDepth::Eight) => ColorType::Rgba8, (png::ColorType::Rgba, png::BitDepth::Sixteen) => ColorType::Rgba16, (png::ColorType::Grayscale, png::BitDepth::One) => { return Err(unsupported_color(ExtendedColorType::L1)) } (png::ColorType::GrayscaleAlpha, png::BitDepth::One) => { return Err(unsupported_color(ExtendedColorType::La1)) } (png::ColorType::Rgb, png::BitDepth::One) => { return Err(unsupported_color(ExtendedColorType::Rgb1)) } (png::ColorType::Rgba, png::BitDepth::One) => { return Err(unsupported_color(ExtendedColorType::Rgba1)) } (png::ColorType::Grayscale, png::BitDepth::Two) => { return Err(unsupported_color(ExtendedColorType::L2)) } (png::ColorType::GrayscaleAlpha, png::BitDepth::Two) => { return Err(unsupported_color(ExtendedColorType::La2)) } (png::ColorType::Rgb, png::BitDepth::Two) => { return Err(unsupported_color(ExtendedColorType::Rgb2)) } (png::ColorType::Rgba, png::BitDepth::Two) => { return Err(unsupported_color(ExtendedColorType::Rgba2)) } (png::ColorType::Grayscale, png::BitDepth::Four) => { return Err(unsupported_color(ExtendedColorType::L4)) } (png::ColorType::GrayscaleAlpha, 
png::BitDepth::Four) => { return Err(unsupported_color(ExtendedColorType::La4)) } (png::ColorType::Rgb, png::BitDepth::Four) => { return Err(unsupported_color(ExtendedColorType::Rgb4)) } (png::ColorType::Rgba, png::BitDepth::Four) => { return Err(unsupported_color(ExtendedColorType::Rgba4)) } (png::ColorType::Indexed, bits) => { return Err(unsupported_color(ExtendedColorType::Unknown(bits as u8))) } }; Ok(PngDecoder { color_type, reader, limits, }) } /// Returns the gamma value of the image or None if no gamma value is indicated. /// /// If an sRGB chunk is present this method returns a gamma value of 0.45455 and ignores the /// value in the gAMA chunk. This is the recommended behavior according to the PNG standard: /// /// > When the sRGB chunk is present, [...] decoders that recognize the sRGB chunk but are not /// > capable of colour management are recommended to ignore the gAMA and cHRM chunks, and use /// > the values given above as if they had appeared in gAMA and cHRM chunks. pub fn gamma_value(&self) -> ImageResult> { Ok(self .reader .info() .source_gamma .map(|x| f64::from(x.into_scaled()) / 100_000.0)) } /// Turn this into an iterator over the animation frames. /// /// Reading the complete animation requires more memory than reading the data from the IDAT /// frame–multiple frame buffers need to be reserved at the same time. We further do not /// support compositing 16-bit colors. In any case this would be lossy as the interface of /// animation decoders does not support 16-bit colors. /// /// If something is not supported or a limit is violated then the decoding step that requires /// them will fail and an error will be returned instead of the frame. No further frames will /// be returned. pub fn apng(self) -> ImageResult> { Ok(ApngDecoder::new(self)) } /// Returns if the image contains an animation. /// /// Note that the file itself decides if the default image is considered to be part of the /// animation. When it is not the common interpretation is to use it as a thumbnail. /// /// If a non-animated image is converted into an `ApngDecoder` then its iterator is empty. pub fn is_apng(&self) -> ImageResult { Ok(self.reader.info().animation_control.is_some()) } } fn unsupported_color(ect: ExtendedColorType) -> ImageError { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Png.into(), UnsupportedErrorKind::Color(ect), )) } impl ImageDecoder for PngDecoder { fn dimensions(&self) -> (u32, u32) { self.reader.info().size() } fn color_type(&self) -> ColorType { self.color_type } fn icc_profile(&mut self) -> ImageResult>> { Ok(self.reader.info().icc_profile.as_ref().map(|x| x.to_vec())) } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { use byteorder_lite::{BigEndian, ByteOrder, NativeEndian}; assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); self.reader.next_frame(buf).map_err(ImageError::from_png)?; // PNG images are big endian. For 16 bit per channel and larger types, // the buffer may need to be reordered to native endianness per the // contract of `read_image`. // TODO: assumes equal channel bit depth. 
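        // Worked example (illustrative): a 16-bit sample with value 0x1234 arrives
        // from the PNG stream as the big-endian byte pair [0x12, 0x34]; on a
        // little-endian target the swap below rewrites those bytes as [0x34, 0x12],
        // which is the same value 0x1234 in native byte order.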
let bpc = self.color_type().bytes_per_pixel() / self.color_type().channel_count(); match bpc { 1 => (), // No reodering necessary for u8 2 => buf.chunks_exact_mut(2).for_each(|c| { let v = BigEndian::read_u16(c); NativeEndian::write_u16(c, v); }), _ => unreachable!(), } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } fn set_limits(&mut self, limits: Limits) -> ImageResult<()> { limits.check_support(&crate::LimitSupport::default())?; let info = self.reader.info(); limits.check_dimensions(info.width, info.height)?; self.limits = limits; // TODO: add `png::Reader::change_limits()` and call it here // to also constrain the internal buffer allocations in the PNG crate Ok(()) } } /// An [`AnimationDecoder`] adapter of [`PngDecoder`]. /// /// See [`PngDecoder::apng`] for more information. /// /// [`AnimationDecoder`]: ../trait.AnimationDecoder.html /// [`PngDecoder`]: struct.PngDecoder.html /// [`PngDecoder::apng`]: struct.PngDecoder.html#method.apng pub struct ApngDecoder { inner: PngDecoder, /// The current output buffer. current: Option, /// The previous output buffer, used for dispose op previous. previous: Option, /// The dispose op of the current frame. dispose: DisposeOp, /// The region to dispose of the previous frame. dispose_region: Option<(u32, u32, u32, u32)>, /// The number of image still expected to be able to load. remaining: u32, /// The next (first) image is the thumbnail. has_thumbnail: bool, } impl ApngDecoder { fn new(inner: PngDecoder) -> Self { let info = inner.reader.info(); let remaining = match info.animation_control() { // The expected number of fcTL in the remaining image. Some(actl) => actl.num_frames, None => 0, }; // If the IDAT has no fcTL then it is not part of the animation counted by // num_frames. All following fdAT chunks must be preceded by an fcTL let has_thumbnail = info.frame_control.is_none(); ApngDecoder { inner, current: None, previous: None, dispose: DisposeOp::Background, dispose_region: None, remaining, has_thumbnail, } } // TODO: thumbnail(&mut self) -> Option> /// Decode one subframe and overlay it on the canvas. fn mix_next_frame(&mut self) -> Result, ImageError> { // The iterator always produces RGBA8 images const COLOR_TYPE: ColorType = ColorType::Rgba8; // Allocate the buffers, honoring the memory limits let (width, height) = self.inner.dimensions(); { let limits = &mut self.inner.limits; if self.previous.is_none() { limits.reserve_buffer(width, height, COLOR_TYPE)?; self.previous = Some(RgbaImage::new(width, height)); } if self.current.is_none() { limits.reserve_buffer(width, height, COLOR_TYPE)?; self.current = Some(RgbaImage::new(width, height)); } } // Remove this image from remaining. self.remaining = match self.remaining.checked_sub(1) { None => return Ok(None), Some(next) => next, }; // Shorten ourselves to 0 in case of error. let remaining = self.remaining; self.remaining = 0; // Skip the thumbnail that is not part of the animation. 
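        // Descriptive note (added for clarity): APNG disposal works on the frame that
        // was just composited. `None` keeps the composited output as-is, `Background`
        // clears that frame's region to transparent black, and `Previous` restores the
        // region to what it was before that frame was drawn. The `match self.dispose`
        // further below applies this before the next frame is blended in.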
if self.has_thumbnail { // Clone the limits so that our one-off allocation that's destroyed after this scope doesn't persist let mut limits = self.inner.limits.clone(); limits.reserve_usize(self.inner.reader.output_buffer_size())?; let mut buffer = vec![0; self.inner.reader.output_buffer_size()]; // TODO: add `png::Reader::change_limits()` and call it here // to also constrain the internal buffer allocations in the PNG crate self.inner .reader .next_frame(&mut buffer) .map_err(ImageError::from_png)?; self.has_thumbnail = false; } self.animatable_color_type()?; // We've initialized them earlier in this function let previous = self.previous.as_mut().unwrap(); let current = self.current.as_mut().unwrap(); // Dispose of the previous frame. match self.dispose { DisposeOp::None => { previous.clone_from(current); } DisposeOp::Background => { previous.clone_from(current); if let Some((px, py, width, height)) = self.dispose_region { let mut region_current = current.sub_image(px, py, width, height); // FIXME: This is a workaround for the fact that `pixels_mut` is not implemented let pixels: Vec<_> = region_current.pixels().collect(); for (x, y, _) in &pixels { region_current.put_pixel(*x, *y, Rgba::from([0, 0, 0, 0])); } } else { // The first frame is always a background frame. current.pixels_mut().for_each(|pixel| { *pixel = Rgba::from([0, 0, 0, 0]); }); } } DisposeOp::Previous => { let (px, py, width, height) = self .dispose_region .expect("The first frame must not set dispose=Previous"); let region_previous = previous.sub_image(px, py, width, height); current .copy_from(®ion_previous.to_image(), px, py) .unwrap(); } } // The allocations from now on are not going to persist, // and will be destroyed at the end of the scope. // Clone the limits so that any changes to them die with the allocations. let mut limits = self.inner.limits.clone(); // Read next frame data. let raw_frame_size = self.inner.reader.output_buffer_size(); limits.reserve_usize(raw_frame_size)?; let mut buffer = vec![0; raw_frame_size]; // TODO: add `png::Reader::change_limits()` and call it here // to also constrain the internal buffer allocations in the PNG crate self.inner .reader .next_frame(&mut buffer) .map_err(ImageError::from_png)?; let info = self.inner.reader.info(); // Find out how to interpret the decoded frame. let (width, height, px, py, blend); match info.frame_control() { None => { width = info.width; height = info.height; px = 0; py = 0; blend = BlendOp::Source; } Some(fc) => { width = fc.width; height = fc.height; px = fc.x_offset; py = fc.y_offset; blend = fc.blend_op; self.dispose = fc.dispose_op; } }; self.dispose_region = Some((px, py, width, height)); // Turn the data into an rgba image proper. limits.reserve_buffer(width, height, COLOR_TYPE)?; let source = match self.inner.color_type { ColorType::L8 => { let image = ImageBuffer::, _>::from_raw(width, height, buffer).unwrap(); DynamicImage::ImageLuma8(image).into_rgba8() } ColorType::La8 => { let image = ImageBuffer::, _>::from_raw(width, height, buffer).unwrap(); DynamicImage::ImageLumaA8(image).into_rgba8() } ColorType::Rgb8 => { let image = ImageBuffer::, _>::from_raw(width, height, buffer).unwrap(); DynamicImage::ImageRgb8(image).into_rgba8() } ColorType::Rgba8 => ImageBuffer::, _>::from_raw(width, height, buffer).unwrap(), ColorType::L16 | ColorType::Rgb16 | ColorType::La16 | ColorType::Rgba16 => { // TODO: to enable remove restriction in `animatable_color_type` method. 
unreachable!("16-bit apng not yet support") } _ => unreachable!("Invalid png color"), }; // We've converted the raw frame to RGBA8 and disposed of the original allocation limits.free_usize(raw_frame_size); match blend { BlendOp::Source => { current .copy_from(&source, px, py) .expect("Invalid png image not detected in png"); } BlendOp::Over => { // TODO: investigate speed, speed-ups, and bounds-checks. for (x, y, p) in source.enumerate_pixels() { current.get_pixel_mut(x + px, y + py).blend(p); } } } // Ok, we can proceed with actually remaining images. self.remaining = remaining; // Return composited output buffer. Ok(Some(self.current.as_ref().unwrap())) } fn animatable_color_type(&self) -> Result<(), ImageError> { match self.inner.color_type { ColorType::L8 | ColorType::Rgb8 | ColorType::La8 | ColorType::Rgba8 => Ok(()), // TODO: do not handle multi-byte colors. Remember to implement it in `mix_next_frame`. ColorType::L16 | ColorType::Rgb16 | ColorType::La16 | ColorType::Rgba16 => { Err(unsupported_color(self.inner.color_type.into())) } _ => unreachable!("{:?} not a valid png color", self.inner.color_type), } } } impl<'a, R: BufRead + Seek + 'a> AnimationDecoder<'a> for ApngDecoder { fn into_frames(self) -> Frames<'a> { struct FrameIterator(ApngDecoder); impl Iterator for FrameIterator { type Item = ImageResult; fn next(&mut self) -> Option { let image = match self.0.mix_next_frame() { Ok(Some(image)) => image.clone(), Ok(None) => return None, Err(err) => return Some(Err(err)), }; let info = self.0.inner.reader.info(); let fc = info.frame_control().unwrap(); // PNG delays are rations in seconds. let num = u32::from(fc.delay_num) * 1_000u32; let denom = match fc.delay_den { // The standard dictates to replace by 100 when the denominator is 0. 0 => 100, d => u32::from(d), }; let delay = Delay::from_ratio(Ratio::new(num, denom)); Some(Ok(Frame::from_parts(image, 0, 0, delay))) } } Frames::new(Box::new(FrameIterator(self))) } } /// PNG encoder pub struct PngEncoder { w: W, compression: CompressionType, filter: FilterType, } /// Compression level of a PNG encoder. The default setting is `Fast`. #[derive(Clone, Copy, Debug, Eq, PartialEq)] #[non_exhaustive] #[derive(Default)] pub enum CompressionType { /// Default compression level Default, /// Fast, minimal compression #[default] Fast, /// High compression level Best, } /// Filter algorithms used to process image data to improve compression. /// /// The default filter is `Adaptive`. #[derive(Clone, Copy, Debug, Eq, PartialEq)] #[non_exhaustive] #[derive(Default)] pub enum FilterType { /// No processing done, best used for low bit depth grayscale or data with a /// low color count NoFilter, /// Filters based on previous pixel in the same scanline Sub, /// Filters based on the scanline above Up, /// Filters based on the average of left and right neighbor pixels Avg, /// Algorithm that takes into account the left, upper left, and above pixels Paeth, /// Uses a heuristic to select one of the preceding filters for each /// scanline rather than one filter for the entire image #[default] Adaptive, } #[derive(Clone, Copy, Debug, Eq, PartialEq)] #[non_exhaustive] enum BadPngRepresentation { ColorType(ExtendedColorType), } impl PngEncoder { /// Create a new encoder that writes its output to ```w``` pub fn new(w: W) -> PngEncoder { PngEncoder { w, compression: CompressionType::default(), filter: FilterType::default(), } } /// Create a new encoder that writes its output to `w` with `CompressionType` `compression` and /// `FilterType` `filter`. 
/// /// It is best to view the options as a _hint_ to the implementation on the smallest or fastest /// option for encoding a particular image. That is, using options that map directly to a PNG /// image parameter will use this parameter where possible. But variants that have no direct /// mapping may be interpreted differently in minor versions. The exact output is expressly /// __not__ part of the SemVer stability guarantee. /// /// Note that it is not optimal to use a single filter type, so an adaptive /// filter type is selected as the default. The filter which best minimizes /// file size may change with the type of compression used. pub fn new_with_quality( w: W, compression: CompressionType, filter: FilterType, ) -> PngEncoder { PngEncoder { w, compression, filter, } } fn encode_inner( self, data: &[u8], width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let (ct, bits) = match color { ExtendedColorType::L8 => (png::ColorType::Grayscale, png::BitDepth::Eight), ExtendedColorType::L16 => (png::ColorType::Grayscale, png::BitDepth::Sixteen), ExtendedColorType::La8 => (png::ColorType::GrayscaleAlpha, png::BitDepth::Eight), ExtendedColorType::La16 => (png::ColorType::GrayscaleAlpha, png::BitDepth::Sixteen), ExtendedColorType::Rgb8 => (png::ColorType::Rgb, png::BitDepth::Eight), ExtendedColorType::Rgb16 => (png::ColorType::Rgb, png::BitDepth::Sixteen), ExtendedColorType::Rgba8 => (png::ColorType::Rgba, png::BitDepth::Eight), ExtendedColorType::Rgba16 => (png::ColorType::Rgba, png::BitDepth::Sixteen), _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Png.into(), UnsupportedErrorKind::Color(color), ), )) } }; let comp = match self.compression { CompressionType::Default => png::Compression::Default, CompressionType::Best => png::Compression::Best, _ => png::Compression::Fast, }; let (filter, adaptive_filter) = match self.filter { FilterType::NoFilter => ( png::FilterType::NoFilter, png::AdaptiveFilterType::NonAdaptive, ), FilterType::Sub => (png::FilterType::Sub, png::AdaptiveFilterType::NonAdaptive), FilterType::Up => (png::FilterType::Up, png::AdaptiveFilterType::NonAdaptive), FilterType::Avg => (png::FilterType::Avg, png::AdaptiveFilterType::NonAdaptive), FilterType::Paeth => (png::FilterType::Paeth, png::AdaptiveFilterType::NonAdaptive), FilterType::Adaptive => (png::FilterType::Sub, png::AdaptiveFilterType::Adaptive), }; let mut encoder = png::Encoder::new(self.w, width, height); encoder.set_color(ct); encoder.set_depth(bits); encoder.set_compression(comp); encoder.set_filter(filter); encoder.set_adaptive_filter(adaptive_filter); let mut writer = encoder .write_header() .map_err(|e| ImageError::IoError(e.into()))?; writer .write_image_data(data) .map_err(|e| ImageError::IoError(e.into())) } } impl ImageEncoder for PngEncoder { /// Write a PNG image with the specified width, height, and color type. /// /// For color types with 16-bit per channel or larger, the contents of `buf` should be in /// native endian. `PngEncoder` will automatically convert to big endian as required by the /// underlying PNG format. 
#[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { use byteorder_lite::{BigEndian, ByteOrder, NativeEndian}; use ExtendedColorType::*; let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); // PNG images are big endian. For 16 bit per channel and larger types, // the buffer may need to be reordered to big endian per the // contract of `write_image`. // TODO: assumes equal channel bit depth. match color_type { L8 | La8 | Rgb8 | Rgba8 => { // No reodering necessary for u8 self.encode_inner(buf, width, height, color_type) } L16 | La16 | Rgb16 | Rgba16 => { // Because the buffer is immutable and the PNG encoder does not // yet take Write/Read traits, create a temporary buffer for // big endian reordering. let mut reordered = vec![0; buf.len()]; buf.chunks_exact(2) .zip(reordered.chunks_exact_mut(2)) .for_each(|(b, r)| BigEndian::write_u16(r, NativeEndian::read_u16(b))); self.encode_inner(&reordered, width, height, color_type) } _ => Err(ImageError::Encoding(EncodingError::new( ImageFormat::Png.into(), BadPngRepresentation::ColorType(color_type), ))), } } } impl ImageError { fn from_png(err: png::DecodingError) -> ImageError { use png::DecodingError::*; match err { IoError(err) => ImageError::IoError(err), // The input image was not a valid PNG. err @ Format(_) => { ImageError::Decoding(DecodingError::new(ImageFormat::Png.into(), err)) } // Other is used when: // - The decoder is polled for more animation frames despite being done (or not being animated // in the first place). // - The output buffer does not have the required size. 
err @ Parameter(_) => ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(err.to_string()), )), LimitsExceeded => { ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory)) } } } } impl fmt::Display for BadPngRepresentation { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Self::ColorType(color_type) => { write!(f, "The color {color_type:?} can not be represented in PNG.") } } } } impl std::error::Error for BadPngRepresentation {} #[cfg(test)] mod tests { use super::*; use std::io::{BufReader, Cursor, Read}; #[test] fn ensure_no_decoder_off_by_one() { let dec = PngDecoder::new(BufReader::new( std::fs::File::open("tests/images/png/bugfixes/debug_triangle_corners_widescreen.png") .unwrap(), )) .expect("Unable to read PNG file (does it exist?)"); assert_eq![(2000, 1000), dec.dimensions()]; assert_eq![ ColorType::Rgb8, dec.color_type(), "Image MUST have the Rgb8 format" ]; let correct_bytes = crate::image::decoder_to_vec(dec) .expect("Unable to read file") .bytes() .map(|x| x.expect("Unable to read byte")) .collect::>(); assert_eq![6_000_000, correct_bytes.len()]; } #[test] fn underlying_error() { use std::error::Error; let mut not_png = std::fs::read("tests/images/png/bugfixes/debug_triangle_corners_widescreen.png") .unwrap(); not_png[0] = 0; let error = PngDecoder::new(Cursor::new(¬_png)).err().unwrap(); let _ = error .source() .unwrap() .downcast_ref::() .expect("Caused by a png error"); } #[test] fn encode_bad_color_type() { // regression test for issue #1663 let image = DynamicImage::new_rgb32f(1, 1); let mut target = Cursor::new(vec![]); let _ = image.write_to(&mut target, ImageFormat::Png); } } image-0.25.5/src/codecs/pnm/autobreak.rs000064400000000000000000000070071046102023000161470ustar 00000000000000//! Insert line breaks between written buffers when they would overflow the line length. use std::io; // The pnm standard says to insert line breaks after 70 characters. Assumes that no line breaks // are actually written. We have to be careful to fully commit buffers or not commit them at all, // otherwise we might insert a newline in the middle of a token. 
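// Put differently (descriptive note): each `write` call is treated as one indivisible
// token. If appending that token would push the current line past `line_capacity`,
// a b'\n' is emitted and the pending line flushed first, so with a capacity of 10 the
// writes "0123456789" + "0123456789" come out as "0123456789\n0123456789" (see the
// tests at the bottom of this file).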
pub(crate) struct AutoBreak { wrapped: W, line_capacity: usize, line: Vec, has_newline: bool, panicked: bool, // see https://github.com/rust-lang/rust/issues/30888 } impl AutoBreak { pub(crate) fn new(writer: W, line_capacity: usize) -> Self { AutoBreak { wrapped: writer, line_capacity, line: Vec::with_capacity(line_capacity + 1), has_newline: false, panicked: false, } } fn flush_buf(&mut self) -> io::Result<()> { // from BufWriter let mut written = 0; let len = self.line.len(); let mut ret = Ok(()); while written < len { self.panicked = true; let r = self.wrapped.write(&self.line[written..]); self.panicked = false; match r { Ok(0) => { ret = Err(io::Error::new( io::ErrorKind::WriteZero, "failed to write the buffered data", )); break; } Ok(n) => written += n, Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {} Err(e) => { ret = Err(e); break; } } } if written > 0 { self.line.drain(..written); } ret } } impl io::Write for AutoBreak { fn write(&mut self, buffer: &[u8]) -> io::Result { if self.has_newline { self.flush()?; self.has_newline = false; } if !self.line.is_empty() && self.line.len() + buffer.len() > self.line_capacity { self.line.push(b'\n'); self.has_newline = true; self.flush()?; self.has_newline = false; } self.line.extend_from_slice(buffer); Ok(buffer.len()) } fn flush(&mut self) -> io::Result<()> { self.flush_buf()?; self.wrapped.flush() } } impl Drop for AutoBreak { fn drop(&mut self) { if !self.panicked { let _r = self.flush_buf(); // internal writer flushed automatically by Drop } } } #[cfg(test)] mod tests { use super::*; use std::io::Write; #[test] fn test_aligned_writes() { let mut output = Vec::new(); { let mut writer = AutoBreak::new(&mut output, 10); writer.write_all(b"0123456789").unwrap(); writer.write_all(b"0123456789").unwrap(); } assert_eq!(output.as_slice(), b"0123456789\n0123456789"); } #[test] fn test_greater_writes() { let mut output = Vec::new(); { let mut writer = AutoBreak::new(&mut output, 10); writer.write_all(b"012").unwrap(); writer.write_all(b"345").unwrap(); writer.write_all(b"0123456789").unwrap(); writer.write_all(b"012345678910").unwrap(); writer.write_all(b"_").unwrap(); } assert_eq!(output.as_slice(), b"012345\n0123456789\n012345678910\n_"); } } image-0.25.5/src/codecs/pnm/decoder.rs000064400000000000000000001334071046102023000156030ustar 00000000000000use std::error; use std::fmt::{self, Display}; use std::io::{self, Read}; use std::mem::size_of; use std::num::ParseIntError; use std::str::{self, FromStr}; use super::{ArbitraryHeader, ArbitraryTuplType, BitmapHeader, GraymapHeader, PixmapHeader}; use super::{HeaderRecord, PnmHeader, PnmSubtype, SampleEncoding}; use crate::color::{ColorType, ExtendedColorType}; use crate::error::{ DecodingError, ImageError, ImageResult, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageFormat}; use crate::utils; use byteorder_lite::{BigEndian, ByteOrder, NativeEndian}; /// All errors that can occur when attempting to parse a PNM #[derive(Debug, Clone)] enum DecoderError { /// PNM's "P[123456]" signature wrong or missing PnmMagicInvalid([u8; 2]), /// Couldn't parse the specified string as an integer from the specified source UnparsableValue(ErrorDataSource, String, ParseIntError), /// More than the exactly one allowed plane specified by the format NonAsciiByteInHeader(u8), /// The PAM header contained a non-ASCII byte NonAsciiLineInPamHeader, /// A sample string contained a non-ASCII byte NonAsciiSample, /// The byte after the P7 magic was not 0x0A NEWLINE 
NotNewlineAfterP7Magic(u8), /// The PNM header had too few lines UnexpectedPnmHeaderEnd, /// The specified line was specified twice HeaderLineDuplicated(PnmHeaderLine), /// The line with the specified ID was not understood HeaderLineUnknown(String), /// At least one of the required lines were missing from the header (are `None` here) /// /// Same names as [`PnmHeaderLine`](enum.PnmHeaderLine.html) #[allow(missing_docs)] HeaderLineMissing { height: Option, width: Option, depth: Option, maxval: Option, }, /// Not enough data was provided to the Decoder to decode the image InputTooShort, /// Sample raster contained unexpected byte UnexpectedByteInRaster(u8), /// Specified sample was out of bounds (e.g. >1 in B&W) SampleOutOfBounds(u8), /// The image's maxval is zero MaxvalZero, /// The image's maxval exceeds 0xFFFF MaxvalTooBig(u32), /// The specified tuple type supports restricted depths and maxvals, those restrictions were not met InvalidDepthOrMaxval { tuple_type: ArbitraryTuplType, depth: u32, maxval: u32, }, /// The specified tuple type supports restricted depths, those restrictions were not met InvalidDepth { tuple_type: ArbitraryTuplType, depth: u32, }, /// The tuple type was not recognised by the parser TupleTypeUnrecognised, /// Overflowed the specified value when parsing Overflow, } impl Display for DecoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { DecoderError::PnmMagicInvalid(magic) => f.write_fmt(format_args!( "Expected magic constant for PNM: P1..P7, got [{:#04X?}, {:#04X?}]", magic[0], magic[1] )), DecoderError::UnparsableValue(src, data, err) => { f.write_fmt(format_args!("Error parsing {data:?} as {src}: {err}")) } DecoderError::NonAsciiByteInHeader(c) => { f.write_fmt(format_args!("Non-ASCII character {c:#04X?} in header")) } DecoderError::NonAsciiLineInPamHeader => f.write_str("Non-ASCII line in PAM header"), DecoderError::NonAsciiSample => { f.write_str("Non-ASCII character where sample value was expected") } DecoderError::NotNewlineAfterP7Magic(c) => f.write_fmt(format_args!( "Expected newline after P7 magic, got {c:#04X?}" )), DecoderError::UnexpectedPnmHeaderEnd => f.write_str("Unexpected end of PNM header"), DecoderError::HeaderLineDuplicated(line) => { f.write_fmt(format_args!("Duplicate {line} line")) } DecoderError::HeaderLineUnknown(identifier) => f.write_fmt(format_args!( "Unknown header line with identifier {identifier:?}" )), DecoderError::HeaderLineMissing { height, width, depth, maxval, } => f.write_fmt(format_args!( "Missing header line: have height={height:?}, width={width:?}, depth={depth:?}, maxval={maxval:?}" )), DecoderError::InputTooShort => { f.write_str("Not enough data was provided to the Decoder to decode the image") } DecoderError::UnexpectedByteInRaster(c) => f.write_fmt(format_args!( "Unexpected character {c:#04X?} within sample raster" )), DecoderError::SampleOutOfBounds(val) => { f.write_fmt(format_args!("Sample value {val} outside of bounds")) } DecoderError::MaxvalZero => f.write_str("Image MAXVAL is zero"), DecoderError::MaxvalTooBig(maxval) => { f.write_fmt(format_args!("Image MAXVAL exceeds {}: {}", 0xFFFF, maxval)) } DecoderError::InvalidDepthOrMaxval { tuple_type, depth, maxval, } => f.write_fmt(format_args!( "Invalid depth ({}) or maxval ({}) for tuple type {}", depth, maxval, tuple_type.name() )), DecoderError::InvalidDepth { tuple_type, depth } => f.write_fmt(format_args!( "Invalid depth ({}) for tuple type {}", depth, tuple_type.name() )), DecoderError::TupleTypeUnrecognised => f.write_str("Tuple 
type not recognized"), DecoderError::Overflow => f.write_str("Overflow when parsing value"), } } } /// Note: should `pnm` be extracted into a separate crate, /// this will need to be hidden until that crate hits version `1.0`. impl From for ImageError { fn from(e: DecoderError) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Pnm.into(), e)) } } impl error::Error for DecoderError { fn source(&self) -> Option<&(dyn error::Error + 'static)> { match self { DecoderError::UnparsableValue(_, _, err) => Some(err), _ => None, } } } /// Single-value lines in a PNM header #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum PnmHeaderLine { /// "HEIGHT" Height, /// "WIDTH" Width, /// "DEPTH" Depth, /// "MAXVAL", a.k.a. `maxwhite` Maxval, } impl Display for PnmHeaderLine { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(match self { PnmHeaderLine::Height => "HEIGHT", PnmHeaderLine::Width => "WIDTH", PnmHeaderLine::Depth => "DEPTH", PnmHeaderLine::Maxval => "MAXVAL", }) } } /// Single-value lines in a PNM header #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum ErrorDataSource { /// One of the header lines Line(PnmHeaderLine), /// Value in the preamble Preamble, /// Sample/pixel data Sample, } impl Display for ErrorDataSource { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { ErrorDataSource::Line(l) => l.fmt(f), ErrorDataSource::Preamble => f.write_str("number in preamble"), ErrorDataSource::Sample => f.write_str("sample"), } } } /// Dynamic representation, represents all decodable (sample, depth) combinations. #[derive(Clone, Copy)] enum TupleType { PbmBit, BWBit, GrayU8, GrayU16, RGBU8, RGBU16, } trait Sample { type Representation; /// Representation size in bytes fn sample_size() -> u32 { size_of::() as u32 } fn bytelen(width: u32, height: u32, samples: u32) -> ImageResult { Ok((width * height * samples * Self::sample_size()) as usize) } fn from_bytes(bytes: &[u8], row_size: usize, output_buf: &mut [u8]) -> ImageResult<()>; fn from_ascii(reader: &mut dyn Read, output_buf: &mut [u8]) -> ImageResult<()>; } struct U8; struct U16; struct PbmBit; struct BWBit; trait DecodableImageHeader { fn tuple_type(&self) -> ImageResult; } /// PNM decoder pub struct PnmDecoder { reader: R, header: PnmHeader, tuple: TupleType, } impl PnmDecoder { /// Create a new decoder that decodes from the stream ```read``` pub fn new(mut buffered_read: R) -> ImageResult> { let magic = buffered_read.read_magic_constant()?; let subtype = match magic { [b'P', b'1'] => PnmSubtype::Bitmap(SampleEncoding::Ascii), [b'P', b'2'] => PnmSubtype::Graymap(SampleEncoding::Ascii), [b'P', b'3'] => PnmSubtype::Pixmap(SampleEncoding::Ascii), [b'P', b'4'] => PnmSubtype::Bitmap(SampleEncoding::Binary), [b'P', b'5'] => PnmSubtype::Graymap(SampleEncoding::Binary), [b'P', b'6'] => PnmSubtype::Pixmap(SampleEncoding::Binary), [b'P', b'7'] => PnmSubtype::ArbitraryMap, _ => return Err(DecoderError::PnmMagicInvalid(magic).into()), }; let decoder = match subtype { PnmSubtype::Bitmap(enc) => PnmDecoder::read_bitmap_header(buffered_read, enc), PnmSubtype::Graymap(enc) => PnmDecoder::read_graymap_header(buffered_read, enc), PnmSubtype::Pixmap(enc) => PnmDecoder::read_pixmap_header(buffered_read, enc), PnmSubtype::ArbitraryMap => PnmDecoder::read_arbitrary_header(buffered_read), }?; if utils::check_dimension_overflow( decoder.dimensions().0, decoder.dimensions().1, decoder.color_type().bytes_per_pixel(), ) { return Err(ImageError::Unsupported( 
UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::GenericFeature(format!( "Image dimensions ({}x{}) are too large", decoder.dimensions().0, decoder.dimensions().1 )), ), )); } Ok(decoder) } /// Extract the reader and header after an image has been read. pub fn into_inner(self) -> (R, PnmHeader) { (self.reader, self.header) } fn read_bitmap_header(mut reader: R, encoding: SampleEncoding) -> ImageResult> { let header = reader.read_bitmap_header(encoding)?; Ok(PnmDecoder { reader, tuple: TupleType::PbmBit, header: PnmHeader { decoded: HeaderRecord::Bitmap(header), encoded: None, }, }) } fn read_graymap_header(mut reader: R, encoding: SampleEncoding) -> ImageResult> { let header = reader.read_graymap_header(encoding)?; let tuple_type = header.tuple_type()?; Ok(PnmDecoder { reader, tuple: tuple_type, header: PnmHeader { decoded: HeaderRecord::Graymap(header), encoded: None, }, }) } fn read_pixmap_header(mut reader: R, encoding: SampleEncoding) -> ImageResult> { let header = reader.read_pixmap_header(encoding)?; let tuple_type = header.tuple_type()?; Ok(PnmDecoder { reader, tuple: tuple_type, header: PnmHeader { decoded: HeaderRecord::Pixmap(header), encoded: None, }, }) } fn read_arbitrary_header(mut reader: R) -> ImageResult> { let header = reader.read_arbitrary_header()?; let tuple_type = header.tuple_type()?; Ok(PnmDecoder { reader, tuple: tuple_type, header: PnmHeader { decoded: HeaderRecord::Arbitrary(header), encoded: None, }, }) } } trait HeaderReader: Read { /// Reads the two magic constant bytes fn read_magic_constant(&mut self) -> ImageResult<[u8; 2]> { let mut magic: [u8; 2] = [0, 0]; self.read_exact(&mut magic)?; Ok(magic) } /// Reads a string as well as a single whitespace after it, ignoring comments fn read_next_string(&mut self) -> ImageResult { let mut bytes = Vec::new(); // pair input bytes with a bool mask to remove comments let mark_comments = self.bytes().scan(true, |partof, read| { let byte = match read { Err(err) => return Some((*partof, Err(err))), Ok(byte) => byte, }; let cur_enabled = *partof && byte != b'#'; let next_enabled = cur_enabled || (byte == b'\r' || byte == b'\n'); *partof = next_enabled; Some((cur_enabled, Ok(byte))) }); for (_, byte) in mark_comments.filter(|e| e.0) { match byte { Ok(b'\t' | b'\n' | b'\x0b' | b'\x0c' | b'\r' | b' ') => { if !bytes.is_empty() { break; // We're done as we already have some content } } Ok(byte) if !byte.is_ascii() => { return Err(DecoderError::NonAsciiByteInHeader(byte).into()) } Ok(byte) => { bytes.push(byte); } Err(_) => break, } } if bytes.is_empty() { return Err(ImageError::IoError(io::ErrorKind::UnexpectedEof.into())); } if !bytes.as_slice().is_ascii() { // We have only filled the buffer with characters for which `byte.is_ascii()` holds. unreachable!("Non-ASCII character should have returned sooner") } let string = String::from_utf8(bytes) // We checked the precondition ourselves a few lines before, `bytes.as_slice().is_ascii()`. .unwrap_or_else(|_| unreachable!("Only ASCII characters should be decoded")); Ok(string) } fn read_next_line(&mut self) -> ImageResult { let mut buffer = Vec::new(); loop { let mut byte = [0]; if self.read(&mut byte)? 
== 0 || byte[0] == b'\n' { break; } buffer.push(byte[0]); } String::from_utf8(buffer) .map_err(|e| ImageError::Decoding(DecodingError::new(ImageFormat::Pnm.into(), e))) } fn read_next_u32(&mut self) -> ImageResult { let s = self.read_next_string()?; s.parse::() .map_err(|err| DecoderError::UnparsableValue(ErrorDataSource::Preamble, s, err).into()) } fn read_bitmap_header(&mut self, encoding: SampleEncoding) -> ImageResult { let width = self.read_next_u32()?; let height = self.read_next_u32()?; Ok(BitmapHeader { encoding, height, width, }) } fn read_graymap_header(&mut self, encoding: SampleEncoding) -> ImageResult { self.read_pixmap_header(encoding).map( |PixmapHeader { encoding, width, height, maxval, }| GraymapHeader { encoding, width, height, maxwhite: maxval, }, ) } fn read_pixmap_header(&mut self, encoding: SampleEncoding) -> ImageResult { let width = self.read_next_u32()?; let height = self.read_next_u32()?; let maxval = self.read_next_u32()?; Ok(PixmapHeader { encoding, height, width, maxval, }) } fn read_arbitrary_header(&mut self) -> ImageResult { fn parse_single_value_line( line_val: &mut Option, rest: &str, line: PnmHeaderLine, ) -> ImageResult<()> { if line_val.is_some() { Err(DecoderError::HeaderLineDuplicated(line).into()) } else { let v = rest.trim().parse().map_err(|err| { DecoderError::UnparsableValue(ErrorDataSource::Line(line), rest.to_owned(), err) })?; *line_val = Some(v); Ok(()) } } match self.bytes().next() { None => return Err(ImageError::IoError(io::ErrorKind::UnexpectedEof.into())), Some(Err(io)) => return Err(ImageError::IoError(io)), Some(Ok(b'\n')) => (), Some(Ok(c)) => return Err(DecoderError::NotNewlineAfterP7Magic(c).into()), } let mut line; let mut height: Option = None; let mut width: Option = None; let mut depth: Option = None; let mut maxval: Option = None; let mut tupltype: Option = None; loop { line = self.read_next_line()?; if line.is_empty() { return Err(DecoderError::UnexpectedPnmHeaderEnd.into()); } if line.as_bytes()[0] == b'#' { continue; } if !line.is_ascii() { return Err(DecoderError::NonAsciiLineInPamHeader.into()); } #[allow(deprecated)] let (identifier, rest) = line .trim_left() .split_at(line.find(char::is_whitespace).unwrap_or(line.len())); match identifier { "ENDHDR" => break, "HEIGHT" => parse_single_value_line(&mut height, rest, PnmHeaderLine::Height)?, "WIDTH" => parse_single_value_line(&mut width, rest, PnmHeaderLine::Width)?, "DEPTH" => parse_single_value_line(&mut depth, rest, PnmHeaderLine::Depth)?, "MAXVAL" => parse_single_value_line(&mut maxval, rest, PnmHeaderLine::Maxval)?, "TUPLTYPE" => { let identifier = rest.trim(); if tupltype.is_some() { let appended = tupltype.take().map(|mut v| { v.push(' '); v.push_str(identifier); v }); tupltype = appended; } else { tupltype = Some(identifier.to_string()); } } _ => return Err(DecoderError::HeaderLineUnknown(identifier.to_string()).into()), } } let (Some(h), Some(w), Some(d), Some(m)) = (height, width, depth, maxval) else { return Err(DecoderError::HeaderLineMissing { height, width, depth, maxval, } .into()); }; let tupltype = match tupltype { None => None, Some(ref t) if t == "BLACKANDWHITE" => Some(ArbitraryTuplType::BlackAndWhite), Some(ref t) if t == "BLACKANDWHITE_ALPHA" => { Some(ArbitraryTuplType::BlackAndWhiteAlpha) } Some(ref t) if t == "GRAYSCALE" => Some(ArbitraryTuplType::Grayscale), Some(ref t) if t == "GRAYSCALE_ALPHA" => Some(ArbitraryTuplType::GrayscaleAlpha), Some(ref t) if t == "RGB" => Some(ArbitraryTuplType::RGB), Some(ref t) if t == "RGB_ALPHA" => 
Some(ArbitraryTuplType::RGBAlpha), Some(other) => Some(ArbitraryTuplType::Custom(other)), }; Ok(ArbitraryHeader { height: h, width: w, depth: d, maxval: m, tupltype, }) } } impl HeaderReader for R where R: Read {} impl ImageDecoder for PnmDecoder { fn dimensions(&self) -> (u32, u32) { (self.header.width(), self.header.height()) } fn color_type(&self) -> ColorType { match self.tuple { TupleType::PbmBit => ColorType::L8, TupleType::BWBit => ColorType::L8, TupleType::GrayU8 => ColorType::L8, TupleType::GrayU16 => ColorType::L16, TupleType::RGBU8 => ColorType::Rgb8, TupleType::RGBU16 => ColorType::Rgb16, } } fn original_color_type(&self) -> ExtendedColorType { match self.tuple { TupleType::PbmBit => ExtendedColorType::L1, TupleType::BWBit => ExtendedColorType::L1, TupleType::GrayU8 => ExtendedColorType::L8, TupleType::GrayU16 => ExtendedColorType::L16, TupleType::RGBU8 => ExtendedColorType::Rgb8, TupleType::RGBU16 => ExtendedColorType::Rgb16, } } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); match self.tuple { TupleType::PbmBit => self.read_samples::(1, buf), TupleType::BWBit => self.read_samples::(1, buf), TupleType::RGBU8 => self.read_samples::(3, buf), TupleType::RGBU16 => self.read_samples::(3, buf), TupleType::GrayU8 => self.read_samples::(1, buf), TupleType::GrayU16 => self.read_samples::(1, buf), } } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } impl PnmDecoder { fn read_samples(&mut self, components: u32, buf: &mut [u8]) -> ImageResult<()> { match self.subtype().sample_encoding() { SampleEncoding::Binary => { let width = self.header.width(); let height = self.header.height(); let bytecount = S::bytelen(width, height, components)?; let mut bytes = vec![]; self.reader .by_ref() // This conversion is potentially lossy but unlikely and in that case we error // later anyways. .take(bytecount as u64) .read_to_end(&mut bytes)?; if bytes.len() != bytecount { return Err(DecoderError::InputTooShort.into()); } let width: usize = width.try_into().map_err(|_| DecoderError::Overflow)?; let components: usize = components.try_into().map_err(|_| DecoderError::Overflow)?; let row_size = width .checked_mul(components) .ok_or(DecoderError::Overflow)?; S::from_bytes(&bytes, row_size, buf)?; } SampleEncoding::Ascii => { self.read_ascii::(buf)?; } }; // Scale samples if 8bit or 16bit is not saturated let current_sample_max = self.header.maximal_sample(); let target_sample_max = 256_u32.pow(S::sample_size()) - 1; if current_sample_max != target_sample_max { let factor = target_sample_max as f32 / current_sample_max as f32; if S::sample_size() == 1 { for v in buf.iter_mut() { *v = (f32::from(*v) * factor).round() as u8; } } else if S::sample_size() == 2 { for chunk in buf.chunks_exact_mut(2) { let v = NativeEndian::read_u16(chunk); NativeEndian::write_u16(chunk, (f32::from(v) * factor).round() as u16); } } } Ok(()) } fn read_ascii(&mut self, output_buf: &mut [u8]) -> ImageResult<()> { Basic::from_ascii(&mut self.reader, output_buf) } /// Get the pnm subtype, depending on the magic constant contained in the header pub fn subtype(&self) -> PnmSubtype { self.header.subtype() } } fn read_separated_ascii>(reader: &mut dyn Read) -> ImageResult where T::Err: Display, { let is_separator = |v: &u8| matches! 
{ *v, b'\t' | b'\n' | b'\x0b' | b'\x0c' | b'\r' | b' ' }; let token = reader .bytes() .skip_while(|v| v.as_ref().ok().map_or(false, is_separator)) .take_while(|v| v.as_ref().ok().map_or(false, |c| !is_separator(c))) .collect::, _>>()?; if !token.is_ascii() { return Err(DecoderError::NonAsciiSample.into()); } let string = str::from_utf8(&token) // We checked the precondition ourselves a few lines before with `token.is_ascii()`. .unwrap_or_else(|_| unreachable!("Only ASCII characters should be decoded")); string.parse().map_err(|err| { DecoderError::UnparsableValue(ErrorDataSource::Sample, string.to_owned(), err).into() }) } impl Sample for U8 { type Representation = u8; fn from_bytes(bytes: &[u8], _row_size: usize, output_buf: &mut [u8]) -> ImageResult<()> { output_buf.copy_from_slice(bytes); Ok(()) } fn from_ascii(reader: &mut dyn Read, output_buf: &mut [u8]) -> ImageResult<()> { for b in output_buf { *b = read_separated_ascii(reader)?; } Ok(()) } } impl Sample for U16 { type Representation = u16; fn from_bytes(bytes: &[u8], _row_size: usize, output_buf: &mut [u8]) -> ImageResult<()> { output_buf.copy_from_slice(bytes); for chunk in output_buf.chunks_exact_mut(2) { let v = BigEndian::read_u16(chunk); NativeEndian::write_u16(chunk, v); } Ok(()) } fn from_ascii(reader: &mut dyn Read, output_buf: &mut [u8]) -> ImageResult<()> { for chunk in output_buf.chunks_exact_mut(2) { let v = read_separated_ascii::(reader)?; NativeEndian::write_u16(chunk, v); } Ok(()) } } // The image is encoded in rows of bits, high order bits first. Any bits beyond the row bits should // be ignored. Also, contrary to rgb, black pixels are encoded as a 1 while white is 0. This will // need to be reversed for the grayscale output. impl Sample for PbmBit { type Representation = u8; fn bytelen(width: u32, height: u32, samples: u32) -> ImageResult { let count = width * samples; let linelen = (count / 8) + u32::from((count % 8) != 0); Ok((linelen * height) as usize) } fn from_bytes(bytes: &[u8], row_size: usize, output_buf: &mut [u8]) -> ImageResult<()> { let mut expanded = utils::expand_bits(1, row_size.try_into().unwrap(), bytes); for b in &mut expanded { *b = !*b; } output_buf.copy_from_slice(&expanded); Ok(()) } fn from_ascii(reader: &mut dyn Read, output_buf: &mut [u8]) -> ImageResult<()> { let mut bytes = reader.bytes(); for b in output_buf { loop { let byte = bytes .next() .ok_or_else::(|| DecoderError::InputTooShort.into())??; match byte { b'\t' | b'\n' | b'\x0b' | b'\x0c' | b'\r' | b' ' => continue, b'0' => *b = 255, b'1' => *b = 0, c => return Err(DecoderError::UnexpectedByteInRaster(c).into()), } break; } } Ok(()) } } // Encoded just like a normal U8 but we check the values. 
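// A sample value greater than 1 is rejected with `DecoderError::SampleOutOfBounds`
// in `from_bytes` below.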
impl Sample for BWBit { type Representation = u8; fn from_bytes(bytes: &[u8], row_size: usize, output_buf: &mut [u8]) -> ImageResult<()> { U8::from_bytes(bytes, row_size, output_buf)?; if let Some(val) = output_buf.iter().find(|&val| *val > 1) { return Err(DecoderError::SampleOutOfBounds(*val).into()); } Ok(()) } fn from_ascii(_reader: &mut dyn Read, _output_buf: &mut [u8]) -> ImageResult<()> { unreachable!("BW bits from anymaps are never encoded as ASCII") } } impl DecodableImageHeader for BitmapHeader { fn tuple_type(&self) -> ImageResult { Ok(TupleType::PbmBit) } } impl DecodableImageHeader for GraymapHeader { fn tuple_type(&self) -> ImageResult { match self.maxwhite { 0 => Err(DecoderError::MaxvalZero.into()), v if v <= 0xFF => Ok(TupleType::GrayU8), v if v <= 0xFFFF => Ok(TupleType::GrayU16), _ => Err(DecoderError::MaxvalTooBig(self.maxwhite).into()), } } } impl DecodableImageHeader for PixmapHeader { fn tuple_type(&self) -> ImageResult { match self.maxval { 0 => Err(DecoderError::MaxvalZero.into()), v if v <= 0xFF => Ok(TupleType::RGBU8), v if v <= 0xFFFF => Ok(TupleType::RGBU16), _ => Err(DecoderError::MaxvalTooBig(self.maxval).into()), } } } impl DecodableImageHeader for ArbitraryHeader { fn tuple_type(&self) -> ImageResult { match self.tupltype { _ if self.maxval == 0 => Err(DecoderError::MaxvalZero.into()), None if self.depth == 1 => Ok(TupleType::GrayU8), None if self.depth == 2 => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(ExtendedColorType::La8), ), )), None if self.depth == 3 => Ok(TupleType::RGBU8), None if self.depth == 4 => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(ExtendedColorType::Rgba8), ), )), Some(ArbitraryTuplType::BlackAndWhite) if self.maxval == 1 && self.depth == 1 => { Ok(TupleType::BWBit) } Some(ArbitraryTuplType::BlackAndWhite) => Err(DecoderError::InvalidDepthOrMaxval { tuple_type: ArbitraryTuplType::BlackAndWhite, maxval: self.maxval, depth: self.depth, } .into()), Some(ArbitraryTuplType::Grayscale) if self.depth == 1 && self.maxval <= 0xFF => { Ok(TupleType::GrayU8) } Some(ArbitraryTuplType::Grayscale) if self.depth <= 1 && self.maxval <= 0xFFFF => { Ok(TupleType::GrayU16) } Some(ArbitraryTuplType::Grayscale) => Err(DecoderError::InvalidDepthOrMaxval { tuple_type: ArbitraryTuplType::Grayscale, maxval: self.maxval, depth: self.depth, } .into()), Some(ArbitraryTuplType::RGB) if self.depth == 3 && self.maxval <= 0xFF => { Ok(TupleType::RGBU8) } Some(ArbitraryTuplType::RGB) if self.depth == 3 && self.maxval <= 0xFFFF => { Ok(TupleType::RGBU16) } Some(ArbitraryTuplType::RGB) => Err(DecoderError::InvalidDepth { tuple_type: ArbitraryTuplType::RGB, depth: self.depth, } .into()), Some(ArbitraryTuplType::BlackAndWhiteAlpha) => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::GenericFeature(format!( "Color type {}", ArbitraryTuplType::BlackAndWhiteAlpha.name() )), ), )), Some(ArbitraryTuplType::GrayscaleAlpha) => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(ExtendedColorType::La8), ), )), Some(ArbitraryTuplType::RGBAlpha) => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(ExtendedColorType::Rgba8), ), )), Some(ArbitraryTuplType::Custom(ref custom)) => Err(ImageError::Unsupported( 
UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::GenericFeature(format!("Tuple type {custom:?}")), ), )), None => Err(DecoderError::TupleTypeUnrecognised.into()), } } } #[cfg(test)] mod tests { use super::*; /// Tests reading of a valid blackandwhite pam #[test] fn pam_blackandwhite() { let pamdata = b"P7 WIDTH 4 HEIGHT 4 DEPTH 1 MAXVAL 1 TUPLTYPE BLACKANDWHITE # Comment line ENDHDR \x01\x00\x00\x01\x01\x00\x00\x01\x01\x00\x00\x01\x01\x00\x00\x01"; let decoder = PnmDecoder::new(&pamdata[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.original_color_type(), ExtendedColorType::L1); assert_eq!(decoder.dimensions(), (4, 4)); assert_eq!(decoder.subtype(), PnmSubtype::ArbitraryMap); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!( image, vec![ 0xFF, 0x00, 0x00, 0xFF, 0xFF, 0x00, 0x00, 0xFF, 0xFF, 0x00, 0x00, 0xFF, 0xFF, 0x00, 0x00, 0xFF ] ); match PnmDecoder::new(&pamdata[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { width: 4, height: 4, maxval: 1, depth: 1, tupltype: Some(ArbitraryTuplType::BlackAndWhite), }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } /// Tests reading of a valid grayscale pam #[test] fn pam_grayscale() { let pamdata = b"P7 WIDTH 4 HEIGHT 4 DEPTH 1 MAXVAL 255 TUPLTYPE GRAYSCALE # Comment line ENDHDR \xde\xad\xbe\xef\xde\xad\xbe\xef\xde\xad\xbe\xef\xde\xad\xbe\xef"; let decoder = PnmDecoder::new(&pamdata[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.dimensions(), (4, 4)); assert_eq!(decoder.subtype(), PnmSubtype::ArbitraryMap); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!( image, vec![ 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef ] ); match PnmDecoder::new(&pamdata[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { width: 4, height: 4, depth: 1, maxval: 255, tupltype: Some(ArbitraryTuplType::Grayscale), }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } /// Tests reading of a valid rgb pam #[test] fn pam_rgb() { let pamdata = b"P7 # Comment line MAXVAL 255 TUPLTYPE RGB DEPTH 3 WIDTH 2 HEIGHT 2 ENDHDR \xde\xad\xbe\xef\xde\xad\xbe\xef\xde\xad\xbe\xef"; let decoder = PnmDecoder::new(&pamdata[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::Rgb8); assert_eq!(decoder.dimensions(), (2, 2)); assert_eq!(decoder.subtype(), PnmSubtype::ArbitraryMap); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!( image, vec![0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef] ); match PnmDecoder::new(&pamdata[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { maxval: 255, tupltype: Some(ArbitraryTuplType::RGB), depth: 3, width: 2, height: 2, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } #[test] fn pbm_binary() { // The data contains two rows of the image (each line is padded to the full byte). For // comments on its format, see documentation of `impl SampleType for PbmBit`. 
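        // Concretely: the width is 6, so only the 6 high-order bits of each row byte
        // are used; a 1 bit is black (decoded as 0) and a 0 bit is white (decoded as
        // 255), e.g. 0b0110_11.. decodes to 255, 0, 0, 255, 0, 0.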
let pbmbinary = [&b"P4 6 2\n"[..], &[0b0110_1100_u8, 0b1011_0111]].concat(); let decoder = PnmDecoder::new(&pbmbinary[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.original_color_type(), ExtendedColorType::L1); assert_eq!(decoder.dimensions(), (6, 2)); assert_eq!( decoder.subtype(), PnmSubtype::Bitmap(SampleEncoding::Binary) ); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!(image, vec![255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]); match PnmDecoder::new(&pbmbinary[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Bitmap(BitmapHeader { encoding: SampleEncoding::Binary, width: 6, height: 2, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } /// A previous infinite loop. #[test] fn pbm_binary_ascii_termination() { use std::io::{BufReader, Cursor, Error, ErrorKind, Read, Result}; struct FailRead(Cursor<&'static [u8]>); impl Read for FailRead { fn read(&mut self, buf: &mut [u8]) -> Result { match self.0.read(buf) { Ok(n) if n > 0 => Ok(n), _ => Err(Error::new( ErrorKind::BrokenPipe, "Simulated broken pipe error", )), } } } let pbmbinary = BufReader::new(FailRead(Cursor::new(b"P1 1 1\n"))); let decoder = PnmDecoder::new(pbmbinary).unwrap(); let mut image = vec![0; decoder.total_bytes() as usize]; decoder .read_image(&mut image) .expect_err("Image is malformed"); } #[test] fn pbm_ascii() { // The data contains two rows of the image (each line is padded to the full byte). For // comments on its format, see documentation of `impl SampleType for PbmBit`. Tests all // whitespace characters that should be allowed (the 6 characters according to POSIX). let pbmbinary = b"P1 6 2\n 0 1 1 0 1 1\n1 0 1 1 0\t\n\x0b\x0c\r1"; let decoder = PnmDecoder::new(&pbmbinary[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.original_color_type(), ExtendedColorType::L1); assert_eq!(decoder.dimensions(), (6, 2)); assert_eq!(decoder.subtype(), PnmSubtype::Bitmap(SampleEncoding::Ascii)); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!(image, vec![255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]); match PnmDecoder::new(&pbmbinary[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Bitmap(BitmapHeader { encoding: SampleEncoding::Ascii, width: 6, height: 2, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } #[test] fn pbm_ascii_nospace() { // The data contains two rows of the image (each line is padded to the full byte). Notably, // it is completely within specification for the ascii data not to contain separating // whitespace for the pbm format or any mix. 
let pbmbinary = b"P1 6 2\n011011101101"; let decoder = PnmDecoder::new(&pbmbinary[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.original_color_type(), ExtendedColorType::L1); assert_eq!(decoder.dimensions(), (6, 2)); assert_eq!(decoder.subtype(), PnmSubtype::Bitmap(SampleEncoding::Ascii)); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!(image, vec![255, 0, 0, 255, 0, 0, 0, 255, 0, 0, 255, 0]); match PnmDecoder::new(&pbmbinary[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Bitmap(BitmapHeader { encoding: SampleEncoding::Ascii, width: 6, height: 2, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } #[test] fn pgm_binary() { // The data contains two rows of the image (each line is padded to the full byte). For // comments on its format, see documentation of `impl SampleType for PbmBit`. let elements = (0..16).collect::>(); let pbmbinary = [&b"P5 4 4 255\n"[..], &elements].concat(); let decoder = PnmDecoder::new(&pbmbinary[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.dimensions(), (4, 4)); assert_eq!( decoder.subtype(), PnmSubtype::Graymap(SampleEncoding::Binary) ); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!(image, elements); match PnmDecoder::new(&pbmbinary[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Graymap(GraymapHeader { encoding: SampleEncoding::Binary, width: 4, height: 4, maxwhite: 255, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } #[test] fn pgm_ascii() { // The data contains two rows of the image (each line is padded to the full byte). For // comments on its format, see documentation of `impl SampleType for PbmBit`. let pbmbinary = b"P2 4 4 255\n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15"; let decoder = PnmDecoder::new(&pbmbinary[..]).unwrap(); assert_eq!(decoder.color_type(), ColorType::L8); assert_eq!(decoder.dimensions(), (4, 4)); assert_eq!( decoder.subtype(), PnmSubtype::Graymap(SampleEncoding::Ascii) ); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!(image, (0..16).collect::>()); match PnmDecoder::new(&pbmbinary[..]).unwrap().into_inner() { ( _, PnmHeader { decoded: HeaderRecord::Graymap(GraymapHeader { encoding: SampleEncoding::Ascii, width: 4, height: 4, maxwhite: 255, }), encoded: _, }, ) => (), _ => panic!("Decoded header is incorrect"), } } #[test] fn ppm_ascii() { let ascii = b"P3 1 1 2000\n0 1000 2000"; let decoder = PnmDecoder::new(&ascii[..]).unwrap(); let mut image = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut image).unwrap(); assert_eq!( image, [ 0_u16.to_ne_bytes(), (u16::MAX / 2 + 1).to_ne_bytes(), u16::MAX.to_ne_bytes() ] .into_iter() .flatten() .collect::>() ); } #[test] fn dimension_overflow() { let pamdata = b"P7 # Comment line MAXVAL 255 TUPLTYPE RGB DEPTH 3 WIDTH 4294967295 HEIGHT 4294967295 ENDHDR \xde\xad\xbe\xef\xde\xad\xbe\xef\xde\xad\xbe\xef"; assert!(PnmDecoder::new(&pamdata[..]).is_err()); } #[test] fn issue_1508() { let _ = crate::load_from_memory(b"P391919 16999 1 1 9 919 16999 1 9999 999* 99999 N"); } #[test] fn issue_1616_overflow() { let data = [ 80, 54, 10, 52, 50, 57, 52, 56, 50, 57, 52, 56, 35, 56, 10, 52, 10, 48, 10, 12, 12, 56, ]; // Validate: we have a header. 
Note: we might already calculate that this will fail but // then we could not return information about the header to the caller. let decoder = PnmDecoder::new(&data[..]).unwrap(); let mut image = vec![0; decoder.total_bytes() as usize]; let _ = decoder.read_image(&mut image); } } image-0.25.5/src/codecs/pnm/encoder.rs000064400000000000000000000554771046102023000156270ustar 00000000000000//! Encoding of PNM Images use std::fmt; use std::io; use std::io::Write; use super::AutoBreak; use super::{ArbitraryHeader, ArbitraryTuplType, BitmapHeader, GraymapHeader, PixmapHeader}; use super::{HeaderRecord, PnmHeader, PnmSubtype, SampleEncoding}; use crate::color::ExtendedColorType; use crate::error::{ ImageError, ImageResult, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageEncoder, ImageFormat}; use byteorder_lite::{BigEndian, WriteBytesExt}; enum HeaderStrategy { Dynamic, Subtype(PnmSubtype), Chosen(PnmHeader), } #[derive(Clone, Copy)] pub enum FlatSamples<'a> { U8(&'a [u8]), U16(&'a [u16]), } /// Encodes images to any of the `pnm` image formats. pub struct PnmEncoder { writer: W, header: HeaderStrategy, } /// Encapsulate the checking system in the type system. Non of the fields are actually accessed /// but requiring them forces us to validly construct the struct anyways. struct CheckedImageBuffer<'a> { _image: FlatSamples<'a>, _width: u32, _height: u32, _color: ExtendedColorType, } // Check the header against the buffer. Each struct produces the next after a check. struct UncheckedHeader<'a> { header: &'a PnmHeader, } struct CheckedDimensions<'a> { unchecked: UncheckedHeader<'a>, width: u32, height: u32, } struct CheckedHeaderColor<'a> { dimensions: CheckedDimensions<'a>, color: ExtendedColorType, } struct CheckedHeader<'a> { color: CheckedHeaderColor<'a>, encoding: TupleEncoding<'a>, _image: CheckedImageBuffer<'a>, } enum TupleEncoding<'a> { PbmBits { samples: FlatSamples<'a>, width: u32, }, Ascii { samples: FlatSamples<'a>, }, Bytes { samples: FlatSamples<'a>, }, } impl PnmEncoder { /// Create new `PnmEncoder` from the `writer`. /// /// The encoded images will have some `pnm` format. If more control over the image type is /// required, use either one of `with_subtype` or `with_header`. For more information on the /// behaviour, see `with_dynamic_header`. pub fn new(writer: W) -> Self { PnmEncoder { writer, header: HeaderStrategy::Dynamic, } } /// Encode a specific pnm subtype image. /// /// The magic number and encoding type will be chosen as provided while the rest of the header /// data will be generated dynamically. Trying to encode incompatible images (e.g. encoding an /// RGB image as Graymap) will result in an error. /// /// This will overwrite the effect of earlier calls to `with_header` and `with_dynamic_header`. pub fn with_subtype(self, subtype: PnmSubtype) -> Self { PnmEncoder { writer: self.writer, header: HeaderStrategy::Subtype(subtype), } } /// Enforce the use of a chosen header. /// /// While this option gives the most control over the actual written data, the encoding process /// will error in case the header data and image parameters do not agree. It is the users /// obligation to ensure that the width and height are set accordingly, for example. /// /// Choose this option if you want a lossless decoding/encoding round trip. /// /// This will overwrite the effect of earlier calls to `with_subtype` and `with_dynamic_header`. 
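    ///
    /// A minimal decode-and-reencode sketch (illustrative; assumes a readable
    /// `example.pgm` on disk):
    ///
    /// ```no_run
    /// use image::codecs::pnm::{PnmDecoder, PnmEncoder};
    /// use image::{ExtendedColorType, ImageDecoder};
    ///
    /// # fn main() -> image::ImageResult<()> {
    /// let data = std::fs::read("example.pgm")?;
    /// // Decode once for the pixel data ...
    /// let decoder = PnmDecoder::new(&data[..])?;
    /// let (width, height) = decoder.dimensions();
    /// let color = ExtendedColorType::from(decoder.color_type());
    /// let mut pixels = vec![0; decoder.total_bytes() as usize];
    /// decoder.read_image(&mut pixels)?;
    /// // ... and once more to recover the original header.
    /// let (_, header) = PnmDecoder::new(&data[..])?.into_inner();
    /// let mut reencoded = Vec::new();
    /// PnmEncoder::new(&mut reencoded)
    ///     .with_header(header)
    ///     .encode(&pixels[..], width, height, color)?;
    /// # Ok(())
    /// # }
    /// ```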
pub fn with_header(self, header: PnmHeader) -> Self { PnmEncoder { writer: self.writer, header: HeaderStrategy::Chosen(header), } } /// Create the header dynamically for each image. /// /// This is the default option upon creation of the encoder. With this, most images should be /// encodable but the specific format chosen is out of the users control. The pnm subtype is /// chosen arbitrarily by the library. /// /// This will overwrite the effect of earlier calls to `with_subtype` and `with_header`. pub fn with_dynamic_header(self) -> Self { PnmEncoder { writer: self.writer, header: HeaderStrategy::Dynamic, } } /// Encode an image whose samples are represented as `u8`. /// /// Some `pnm` subtypes are incompatible with some color options, a chosen header most /// certainly with any deviation from the original decoded image. pub fn encode<'s, S>( &mut self, image: S, width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> where S: Into>, { let image = image.into(); match self.header { HeaderStrategy::Dynamic => self.write_dynamic_header(image, width, height, color), HeaderStrategy::Subtype(subtype) => { self.write_subtyped_header(subtype, image, width, height, color) } HeaderStrategy::Chosen(ref header) => { Self::write_with_header(&mut self.writer, header, image, width, height, color) } } } /// Choose any valid pnm format that the image can be expressed in and write its header. /// /// Returns how the body should be written if successful. fn write_dynamic_header( &mut self, image: FlatSamples, width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let depth = u32::from(color.channel_count()); let (maxval, tupltype) = match color { ExtendedColorType::L1 => (1, ArbitraryTuplType::BlackAndWhite), ExtendedColorType::L8 => (0xff, ArbitraryTuplType::Grayscale), ExtendedColorType::L16 => (0xffff, ArbitraryTuplType::Grayscale), ExtendedColorType::La1 => (1, ArbitraryTuplType::BlackAndWhiteAlpha), ExtendedColorType::La8 => (0xff, ArbitraryTuplType::GrayscaleAlpha), ExtendedColorType::La16 => (0xffff, ArbitraryTuplType::GrayscaleAlpha), ExtendedColorType::Rgb8 => (0xff, ArbitraryTuplType::RGB), ExtendedColorType::Rgb16 => (0xffff, ArbitraryTuplType::RGB), ExtendedColorType::Rgba8 => (0xff, ArbitraryTuplType::RGBAlpha), ExtendedColorType::Rgba16 => (0xffff, ArbitraryTuplType::RGBAlpha), _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(color), ), )) } }; let header = PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { width, height, depth, maxval, tupltype: Some(tupltype), }), encoded: None, }; Self::write_with_header(&mut self.writer, &header, image, width, height, color) } /// Try to encode the image with the chosen format, give its corresponding pixel encoding type. 
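    /// (A `Bitmap` subtype accepts `L8` or `L1` buffers, `Graymap` accepts `L8`,
    /// `Pixmap` accepts `Rgb8`, and `ArbitraryMap` falls back to
    /// `write_dynamic_header`; other combinations are rejected.)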
fn write_subtyped_header( &mut self, subtype: PnmSubtype, image: FlatSamples, width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let header = match (subtype, color) { (PnmSubtype::ArbitraryMap, color) => { return self.write_dynamic_header(image, width, height, color) } (PnmSubtype::Pixmap(encoding), ExtendedColorType::Rgb8) => PnmHeader { decoded: HeaderRecord::Pixmap(PixmapHeader { encoding, width, height, maxval: 255, }), encoded: None, }, (PnmSubtype::Graymap(encoding), ExtendedColorType::L8) => PnmHeader { decoded: HeaderRecord::Graymap(GraymapHeader { encoding, width, height, maxwhite: 255, }), encoded: None, }, (PnmSubtype::Bitmap(encoding), ExtendedColorType::L8 | ExtendedColorType::L1) => { PnmHeader { decoded: HeaderRecord::Bitmap(BitmapHeader { encoding, height, width, }), encoded: None, } } (_, _) => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic( "Color type can not be represented in the chosen format".to_owned(), ), ))); } }; Self::write_with_header(&mut self.writer, &header, image, width, height, color) } /// Try to encode the image with the chosen header, checking if values are correct. /// /// Returns how the body should be written if successful. fn write_with_header( writer: &mut dyn Write, header: &PnmHeader, image: FlatSamples, width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let unchecked = UncheckedHeader { header }; unchecked .check_header_dimensions(width, height)? .check_header_color(color)? .check_sample_values(image)? .write_header(writer)? .write_image(writer) } } impl ImageEncoder for PnmEncoder { #[track_caller] fn write_image( mut self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); self.encode(buf, width, height, color_type) } } impl<'a> CheckedImageBuffer<'a> { fn check( image: FlatSamples<'a>, width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult> { let components = color.channel_count() as usize; let uwidth = width as usize; let uheight = height as usize; let expected_len = components .checked_mul(uwidth) .and_then(|v| v.checked_mul(uheight)); if Some(image.len()) != expected_len { // Image buffer does not correspond to size and colour. return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } Ok(CheckedImageBuffer { _image: image, _width: width, _height: height, _color: color, }) } } impl<'a> UncheckedHeader<'a> { fn check_header_dimensions( self, width: u32, height: u32, ) -> ImageResult> { if self.header.width() != width || self.header.height() != height { // Chosen header does not match Image dimensions. return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } Ok(CheckedDimensions { unchecked: self, width, height, }) } } impl<'a> CheckedDimensions<'a> { // Check color compatibility with the header. This will only error when we are certain that // the combination is bogus (e.g. combining Pixmap and Palette) but allows uncertain // combinations (basically a ArbitraryTuplType::Custom with any color of fitting depth). 
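    // For example, Bitmap and Graymap headers accept L1/L8/L16 buffers, a Pixmap
    // header accepts only Rgb8, and a missing or custom PAM tuple type only
    // requires the channel count to match the header depth.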
fn check_header_color(self, color: ExtendedColorType) -> ImageResult> { let components = u32::from(color.channel_count()); match *self.unchecked.header { PnmHeader { decoded: HeaderRecord::Bitmap(_), .. } => match color { ExtendedColorType::L1 | ExtendedColorType::L8 | ExtendedColorType::L16 => (), _ => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic( "PBM format only support luma color types".to_owned(), ), ))) } }, PnmHeader { decoded: HeaderRecord::Graymap(_), .. } => match color { ExtendedColorType::L1 | ExtendedColorType::L8 | ExtendedColorType::L16 => (), _ => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic( "PGM format only support luma color types".to_owned(), ), ))) } }, PnmHeader { decoded: HeaderRecord::Pixmap(_), .. } => match color { ExtendedColorType::Rgb8 => (), _ => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic( "PPM format only support ExtendedColorType::Rgb8".to_owned(), ), ))) } }, PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { depth, ref tupltype, .. }), .. } => match (tupltype, color) { (&Some(ArbitraryTuplType::BlackAndWhite), ExtendedColorType::L1) => (), (&Some(ArbitraryTuplType::BlackAndWhiteAlpha), ExtendedColorType::La8) => (), (&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L1) => (), (&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L8) => (), (&Some(ArbitraryTuplType::Grayscale), ExtendedColorType::L16) => (), (&Some(ArbitraryTuplType::GrayscaleAlpha), ExtendedColorType::La8) => (), (&Some(ArbitraryTuplType::RGB), ExtendedColorType::Rgb8) => (), (&Some(ArbitraryTuplType::RGBAlpha), ExtendedColorType::Rgba8) => (), (&None, _) if depth == components => (), (&Some(ArbitraryTuplType::Custom(_)), _) if depth == components => (), _ if depth != components => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic(format!( "Depth mismatch: header {depth} vs. color {components}" )), ))) } _ => { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::Generic( "Invalid color type for selected PAM color type".to_owned(), ), ))) } }, } Ok(CheckedHeaderColor { dimensions: self, color, }) } } impl<'a> CheckedHeaderColor<'a> { fn check_sample_values(self, image: FlatSamples<'a>) -> ImageResult> { let header_maxval = match self.dimensions.unchecked.header.decoded { HeaderRecord::Bitmap(_) => 1, HeaderRecord::Graymap(GraymapHeader { maxwhite, .. }) => maxwhite, HeaderRecord::Pixmap(PixmapHeader { maxval, .. }) => maxval, HeaderRecord::Arbitrary(ArbitraryHeader { maxval, .. }) => maxval, }; // We trust the image color bit count to be correct at least. let max_sample = match self.color { ExtendedColorType::Unknown(n) if n <= 16 => (1 << n) - 1, ExtendedColorType::L1 => 1, ExtendedColorType::L8 | ExtendedColorType::La8 | ExtendedColorType::Rgb8 | ExtendedColorType::Rgba8 | ExtendedColorType::Bgr8 | ExtendedColorType::Bgra8 => 0xff, ExtendedColorType::L16 | ExtendedColorType::La16 | ExtendedColorType::Rgb16 | ExtendedColorType::Rgba16 => 0xffff, _ => { // Unsupported target color type. return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::Color(self.color), ), )); } }; // Avoid the performance heavy check if possible, e.g. if the header has been chosen by us. if header_maxval < max_sample && !image.all_smaller(header_maxval) { // Sample value greater than allowed for chosen header. 
return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Pnm.into(), UnsupportedErrorKind::GenericFeature( "Sample value greater than allowed for chosen header".to_owned(), ), ), )); } let encoding = image.encoding_for(&self.dimensions.unchecked.header.decoded); let image = CheckedImageBuffer::check( image, self.dimensions.width, self.dimensions.height, self.color, )?; Ok(CheckedHeader { color: self, encoding, _image: image, }) } } impl<'a> CheckedHeader<'a> { fn write_header(self, writer: &mut dyn Write) -> ImageResult> { self.header().write(writer)?; Ok(self.encoding) } fn header(&self) -> &PnmHeader { self.color.dimensions.unchecked.header } } struct SampleWriter<'a>(&'a mut dyn Write); impl SampleWriter<'_> { fn write_samples_ascii(self, samples: V) -> io::Result<()> where V: Iterator, V::Item: fmt::Display, { let mut auto_break_writer = AutoBreak::new(self.0, 70); for value in samples { write!(auto_break_writer, "{value} ")?; } auto_break_writer.flush() } fn write_pbm_bits(self, samples: &[V], width: u32) -> io::Result<()> /* Default gives 0 for all primitives. TODO: replace this with `Zeroable` once it hits stable */ where V: Default + Eq + Copy, { // The length of an encoded scanline let line_width = (width - 1) / 8 + 1; // We'll be writing single bytes, so buffer let mut line_buffer = Vec::with_capacity(line_width as usize); for line in samples.chunks(width as usize) { for byte_bits in line.chunks(8) { let mut byte = 0u8; for i in 0..8 { // Black pixels are encoded as 1s if let Some(&v) = byte_bits.get(i) { if v == V::default() { byte |= 1u8 << (7 - i); } } } line_buffer.push(byte); } self.0.write_all(line_buffer.as_slice())?; line_buffer.clear(); } self.0.flush() } } impl<'a> FlatSamples<'a> { fn len(&self) -> usize { match *self { FlatSamples::U8(arr) => arr.len(), FlatSamples::U16(arr) => arr.len(), } } fn all_smaller(&self, max_val: u32) -> bool { match *self { FlatSamples::U8(arr) => arr.iter().any(|&val| u32::from(val) > max_val), FlatSamples::U16(arr) => arr.iter().any(|&val| u32::from(val) > max_val), } } fn encoding_for(&self, header: &HeaderRecord) -> TupleEncoding<'a> { match *header { HeaderRecord::Bitmap(BitmapHeader { encoding: SampleEncoding::Binary, width, .. }) => TupleEncoding::PbmBits { samples: *self, width, }, HeaderRecord::Bitmap(BitmapHeader { encoding: SampleEncoding::Ascii, .. }) => TupleEncoding::Ascii { samples: *self }, HeaderRecord::Arbitrary(_) => TupleEncoding::Bytes { samples: *self }, HeaderRecord::Graymap(GraymapHeader { encoding: SampleEncoding::Ascii, .. }) | HeaderRecord::Pixmap(PixmapHeader { encoding: SampleEncoding::Ascii, .. }) => TupleEncoding::Ascii { samples: *self }, HeaderRecord::Graymap(GraymapHeader { encoding: SampleEncoding::Binary, .. }) | HeaderRecord::Pixmap(PixmapHeader { encoding: SampleEncoding::Binary, .. 
}) => TupleEncoding::Bytes { samples: *self }, } } } impl<'a> From<&'a [u8]> for FlatSamples<'a> { fn from(samples: &'a [u8]) -> Self { FlatSamples::U8(samples) } } impl<'a> From<&'a [u16]> for FlatSamples<'a> { fn from(samples: &'a [u16]) -> Self { FlatSamples::U16(samples) } } impl TupleEncoding<'_> { fn write_image(&self, writer: &mut dyn Write) -> ImageResult<()> { match *self { TupleEncoding::PbmBits { samples: FlatSamples::U8(samples), width, } => SampleWriter(writer) .write_pbm_bits(samples, width) .map_err(ImageError::IoError), TupleEncoding::PbmBits { samples: FlatSamples::U16(samples), width, } => SampleWriter(writer) .write_pbm_bits(samples, width) .map_err(ImageError::IoError), TupleEncoding::Bytes { samples: FlatSamples::U8(samples), } => writer.write_all(samples).map_err(ImageError::IoError), TupleEncoding::Bytes { samples: FlatSamples::U16(samples), } => samples.iter().try_for_each(|&sample| { writer .write_u16::(sample) .map_err(ImageError::IoError) }), TupleEncoding::Ascii { samples: FlatSamples::U8(samples), } => SampleWriter(writer) .write_samples_ascii(samples.iter()) .map_err(ImageError::IoError), TupleEncoding::Ascii { samples: FlatSamples::U16(samples), } => SampleWriter(writer) .write_samples_ascii(samples.iter()) .map_err(ImageError::IoError), } } } image-0.25.5/src/codecs/pnm/header.rs000064400000000000000000000256361046102023000154320ustar 00000000000000use std::{fmt, io}; /// The kind of encoding used to store sample values #[derive(Clone, Copy, PartialEq, Eq, Debug)] pub enum SampleEncoding { /// Samples are unsigned binary integers in big endian Binary, /// Samples are encoded as decimal ascii strings separated by whitespace Ascii, } /// Denotes the category of the magic number #[derive(Clone, Copy, PartialEq, Eq, Debug)] pub enum PnmSubtype { /// Magic numbers P1 and P4 Bitmap(SampleEncoding), /// Magic numbers P2 and P5 Graymap(SampleEncoding), /// Magic numbers P3 and P6 Pixmap(SampleEncoding), /// Magic number P7 ArbitraryMap, } /// Stores the complete header data of a file. /// /// Internally, provides mechanisms for lossless reencoding. After reading a file with the decoder /// it is possible to recover the header and construct an encoder. Using the encoder on the just /// loaded image should result in a byte copy of the original file (for single image pnms without /// additional trailing data). 
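///
/// A brief sketch of recovering the header after decoding (illustrative;
/// assumes a readable `example.ppm` on disk):
///
/// ```no_run
/// use image::codecs::pnm::PnmDecoder;
///
/// # fn main() -> image::ImageResult<()> {
/// let data = std::fs::read("example.ppm")?;
/// let (_reader, header) = PnmDecoder::new(&data[..])?.into_inner();
/// // The recovered header can be written back into a byte stream.
/// let mut serialized = Vec::new();
/// header.write(&mut serialized)?;
/// # Ok(())
/// # }
/// ```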
pub struct PnmHeader { pub(crate) decoded: HeaderRecord, pub(crate) encoded: Option>, } pub(crate) enum HeaderRecord { Bitmap(BitmapHeader), Graymap(GraymapHeader), Pixmap(PixmapHeader), Arbitrary(ArbitraryHeader), } /// Header produced by a `pbm` file ("Portable Bit Map") #[derive(Clone, Copy, Debug)] pub struct BitmapHeader { /// Binary or Ascii encoded file pub encoding: SampleEncoding, /// Height of the image file pub height: u32, /// Width of the image file pub width: u32, } /// Header produced by a `pgm` file ("Portable Gray Map") #[derive(Clone, Copy, Debug)] pub struct GraymapHeader { /// Binary or Ascii encoded file pub encoding: SampleEncoding, /// Height of the image file pub height: u32, /// Width of the image file pub width: u32, /// Maximum sample value within the image pub maxwhite: u32, } /// Header produced by a `ppm` file ("Portable Pixel Map") #[derive(Clone, Copy, Debug)] pub struct PixmapHeader { /// Binary or Ascii encoded file pub encoding: SampleEncoding, /// Height of the image file pub height: u32, /// Width of the image file pub width: u32, /// Maximum sample value within the image pub maxval: u32, } /// Header produced by a `pam` file ("Portable Arbitrary Map") #[derive(Clone, Debug)] pub struct ArbitraryHeader { /// Height of the image file pub height: u32, /// Width of the image file pub width: u32, /// Number of color channels pub depth: u32, /// Maximum sample value within the image pub maxval: u32, /// Color interpretation of image pixels pub tupltype: Option, } /// Standardized tuple type specifiers in the header of a `pam`. #[derive(Clone, Debug)] pub enum ArbitraryTuplType { /// Pixels are either black (0) or white (1) BlackAndWhite, /// Pixels are either black (0) or white (1) and a second alpha channel BlackAndWhiteAlpha, /// Pixels represent the amount of white Grayscale, /// Grayscale with an additional alpha channel GrayscaleAlpha, /// Three channels: Red, Green, Blue RGB, /// Four channels: Red, Green, Blue, Alpha RGBAlpha, /// An image format which is not standardized Custom(String), } impl ArbitraryTuplType { pub(crate) fn name(&self) -> &str { match self { ArbitraryTuplType::BlackAndWhite => "BLACKANDWHITE", ArbitraryTuplType::BlackAndWhiteAlpha => "BLACKANDWHITE_ALPHA", ArbitraryTuplType::Grayscale => "GRAYSCALE", ArbitraryTuplType::GrayscaleAlpha => "GRAYSCALE_ALPHA", ArbitraryTuplType::RGB => "RGB", ArbitraryTuplType::RGBAlpha => "RGB_ALPHA", ArbitraryTuplType::Custom(custom) => custom, } } } impl PnmSubtype { /// Get the two magic constant bytes corresponding to this format subtype. #[must_use] pub fn magic_constant(self) -> &'static [u8; 2] { match self { PnmSubtype::Bitmap(SampleEncoding::Ascii) => b"P1", PnmSubtype::Graymap(SampleEncoding::Ascii) => b"P2", PnmSubtype::Pixmap(SampleEncoding::Ascii) => b"P3", PnmSubtype::Bitmap(SampleEncoding::Binary) => b"P4", PnmSubtype::Graymap(SampleEncoding::Binary) => b"P5", PnmSubtype::Pixmap(SampleEncoding::Binary) => b"P6", PnmSubtype::ArbitraryMap => b"P7", } } /// Whether samples are stored as binary or as decimal ascii #[must_use] pub fn sample_encoding(self) -> SampleEncoding { match self { PnmSubtype::ArbitraryMap => SampleEncoding::Binary, PnmSubtype::Bitmap(enc) => enc, PnmSubtype::Graymap(enc) => enc, PnmSubtype::Pixmap(enc) => enc, } } } impl PnmHeader { /// Retrieve the format subtype from which the header was created. #[must_use] pub fn subtype(&self) -> PnmSubtype { match self.decoded { HeaderRecord::Bitmap(BitmapHeader { encoding, .. 
}) => PnmSubtype::Bitmap(encoding), HeaderRecord::Graymap(GraymapHeader { encoding, .. }) => PnmSubtype::Graymap(encoding), HeaderRecord::Pixmap(PixmapHeader { encoding, .. }) => PnmSubtype::Pixmap(encoding), HeaderRecord::Arbitrary(ArbitraryHeader { .. }) => PnmSubtype::ArbitraryMap, } } /// The width of the image this header is for. #[must_use] pub fn width(&self) -> u32 { match self.decoded { HeaderRecord::Bitmap(BitmapHeader { width, .. }) => width, HeaderRecord::Graymap(GraymapHeader { width, .. }) => width, HeaderRecord::Pixmap(PixmapHeader { width, .. }) => width, HeaderRecord::Arbitrary(ArbitraryHeader { width, .. }) => width, } } /// The height of the image this header is for. #[must_use] pub fn height(&self) -> u32 { match self.decoded { HeaderRecord::Bitmap(BitmapHeader { height, .. }) => height, HeaderRecord::Graymap(GraymapHeader { height, .. }) => height, HeaderRecord::Pixmap(PixmapHeader { height, .. }) => height, HeaderRecord::Arbitrary(ArbitraryHeader { height, .. }) => height, } } /// The biggest value a sample can have. In other words, the colour resolution. #[must_use] pub fn maximal_sample(&self) -> u32 { match self.decoded { HeaderRecord::Bitmap(BitmapHeader { .. }) => 1, HeaderRecord::Graymap(GraymapHeader { maxwhite, .. }) => maxwhite, HeaderRecord::Pixmap(PixmapHeader { maxval, .. }) => maxval, HeaderRecord::Arbitrary(ArbitraryHeader { maxval, .. }) => maxval, } } /// Retrieve the underlying bitmap header if any #[must_use] pub fn as_bitmap(&self) -> Option<&BitmapHeader> { match self.decoded { HeaderRecord::Bitmap(ref bitmap) => Some(bitmap), _ => None, } } /// Retrieve the underlying graymap header if any #[must_use] pub fn as_graymap(&self) -> Option<&GraymapHeader> { match self.decoded { HeaderRecord::Graymap(ref graymap) => Some(graymap), _ => None, } } /// Retrieve the underlying pixmap header if any #[must_use] pub fn as_pixmap(&self) -> Option<&PixmapHeader> { match self.decoded { HeaderRecord::Pixmap(ref pixmap) => Some(pixmap), _ => None, } } /// Retrieve the underlying arbitrary header if any #[must_use] pub fn as_arbitrary(&self) -> Option<&ArbitraryHeader> { match self.decoded { HeaderRecord::Arbitrary(ref arbitrary) => Some(arbitrary), _ => None, } } /// Write the header back into a binary stream pub fn write(&self, writer: &mut dyn io::Write) -> io::Result<()> { writer.write_all(self.subtype().magic_constant())?; match *self { PnmHeader { encoded: Some(ref content), .. } => writer.write_all(content), PnmHeader { decoded: HeaderRecord::Bitmap(BitmapHeader { encoding: _encoding, width, height, }), .. } => writeln!(writer, "\n{width} {height}"), PnmHeader { decoded: HeaderRecord::Graymap(GraymapHeader { encoding: _encoding, width, height, maxwhite, }), .. } => writeln!(writer, "\n{width} {height} {maxwhite}"), PnmHeader { decoded: HeaderRecord::Pixmap(PixmapHeader { encoding: _encoding, width, height, maxval, }), .. } => writeln!(writer, "\n{width} {height} {maxval}"), PnmHeader { decoded: HeaderRecord::Arbitrary(ArbitraryHeader { width, height, depth, maxval, ref tupltype, }), .. 
} => { struct TupltypeWriter<'a>(&'a Option); impl fmt::Display for TupltypeWriter<'_> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self.0 { Some(tt) => writeln!(f, "TUPLTYPE {}", tt.name()), None => Ok(()), } } } writeln!( writer, "\nWIDTH {}\nHEIGHT {}\nDEPTH {}\nMAXVAL {}\n{}ENDHDR", width, height, depth, maxval, TupltypeWriter(tupltype) ) } } } } impl From for PnmHeader { fn from(header: BitmapHeader) -> Self { PnmHeader { decoded: HeaderRecord::Bitmap(header), encoded: None, } } } impl From for PnmHeader { fn from(header: GraymapHeader) -> Self { PnmHeader { decoded: HeaderRecord::Graymap(header), encoded: None, } } } impl From for PnmHeader { fn from(header: PixmapHeader) -> Self { PnmHeader { decoded: HeaderRecord::Pixmap(header), encoded: None, } } } impl From for PnmHeader { fn from(header: ArbitraryHeader) -> Self { PnmHeader { decoded: HeaderRecord::Arbitrary(header), encoded: None, } } } image-0.25.5/src/codecs/pnm/mod.rs000064400000000000000000000142401046102023000147460ustar 00000000000000//! Decoding of netpbm image formats (pbm, pgm, ppm and pam). //! //! The formats pbm, pgm and ppm are fully supported. The pam decoder recognizes the tuple types //! `BLACKANDWHITE`, `GRAYSCALE` and `RGB` and explicitly recognizes but rejects their `_ALPHA` //! variants for now as alpha color types are unsupported. use self::autobreak::AutoBreak; pub use self::decoder::PnmDecoder; pub use self::encoder::PnmEncoder; use self::header::HeaderRecord; pub use self::header::{ ArbitraryHeader, ArbitraryTuplType, BitmapHeader, GraymapHeader, PixmapHeader, }; pub use self::header::{PnmHeader, PnmSubtype, SampleEncoding}; mod autobreak; mod decoder; mod encoder; mod header; #[cfg(test)] mod tests { use super::*; use crate::image::ImageDecoder; use crate::ExtendedColorType; use byteorder_lite::{ByteOrder, NativeEndian}; fn execute_roundtrip_default(buffer: &[u8], width: u32, height: u32, color: ExtendedColorType) { let mut encoded_buffer = Vec::new(); { let mut encoder = PnmEncoder::new(&mut encoded_buffer); encoder .encode(buffer, width, height, color) .expect("Failed to encode the image buffer"); } let (header, loaded_color, loaded_image) = { let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap(); let color_type = decoder.color_type(); let mut image = vec![0; decoder.total_bytes() as usize]; decoder .read_image(&mut image) .expect("Failed to decode the image"); let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner(); (header, color_type, image) }; assert_eq!(header.width(), width); assert_eq!(header.height(), height); assert_eq!(ExtendedColorType::from(loaded_color), color); assert_eq!(loaded_image.as_slice(), buffer); } fn execute_roundtrip_with_subtype( buffer: &[u8], width: u32, height: u32, color: ExtendedColorType, subtype: PnmSubtype, ) { let mut encoded_buffer = Vec::new(); { let mut encoder = PnmEncoder::new(&mut encoded_buffer).with_subtype(subtype); encoder .encode(buffer, width, height, color) .expect("Failed to encode the image buffer"); } let (header, loaded_color, loaded_image) = { let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap(); let color_type = decoder.color_type(); let mut image = vec![0; decoder.total_bytes() as usize]; decoder .read_image(&mut image) .expect("Failed to decode the image"); let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner(); (header, color_type, image) }; assert_eq!(header.width(), width); assert_eq!(header.height(), height); assert_eq!(header.subtype(), subtype); 
assert_eq!(ExtendedColorType::from(loaded_color), color); assert_eq!(loaded_image.as_slice(), buffer); } fn execute_roundtrip_u16(buffer: &[u16], width: u32, height: u32, color: ExtendedColorType) { let mut encoded_buffer = Vec::new(); { let mut encoder = PnmEncoder::new(&mut encoded_buffer); encoder .encode(buffer, width, height, color) .expect("Failed to encode the image buffer"); } let (header, loaded_color, loaded_image) = { let decoder = PnmDecoder::new(&encoded_buffer[..]).unwrap(); let color_type = decoder.color_type(); let mut image = vec![0; decoder.total_bytes() as usize]; decoder .read_image(&mut image) .expect("Failed to decode the image"); let (_, header) = PnmDecoder::new(&encoded_buffer[..]).unwrap().into_inner(); (header, color_type, image) }; let mut buffer_u8 = vec![0; buffer.len() * 2]; NativeEndian::write_u16_into(buffer, &mut buffer_u8[..]); assert_eq!(header.width(), width); assert_eq!(header.height(), height); assert_eq!(ExtendedColorType::from(loaded_color), color); assert_eq!(loaded_image, buffer_u8); } #[test] fn roundtrip_gray() { #[rustfmt::skip] let buf: [u8; 16] = [ 0, 0, 0, 255, 255, 255, 255, 255, 255, 0, 255, 0, 255, 0, 0, 0, ]; execute_roundtrip_default(&buf, 4, 4, ExtendedColorType::L8); execute_roundtrip_with_subtype(&buf, 4, 4, ExtendedColorType::L8, PnmSubtype::ArbitraryMap); execute_roundtrip_with_subtype( &buf, 4, 4, ExtendedColorType::L8, PnmSubtype::Graymap(SampleEncoding::Ascii), ); execute_roundtrip_with_subtype( &buf, 4, 4, ExtendedColorType::L8, PnmSubtype::Graymap(SampleEncoding::Binary), ); } #[test] fn roundtrip_rgb() { #[rustfmt::skip] let buf: [u8; 27] = [ 0, 0, 0, 0, 0, 255, 0, 255, 0, 0, 255, 255, 255, 0, 0, 255, 0, 255, 255, 255, 0, 255, 255, 255, 255, 255, 255, ]; execute_roundtrip_default(&buf, 3, 3, ExtendedColorType::Rgb8); execute_roundtrip_with_subtype( &buf, 3, 3, ExtendedColorType::Rgb8, PnmSubtype::ArbitraryMap, ); execute_roundtrip_with_subtype( &buf, 3, 3, ExtendedColorType::Rgb8, PnmSubtype::Pixmap(SampleEncoding::Binary), ); execute_roundtrip_with_subtype( &buf, 3, 3, ExtendedColorType::Rgb8, PnmSubtype::Pixmap(SampleEncoding::Ascii), ); } #[test] fn roundtrip_u16() { let buf: [u16; 6] = [0, 1, 0xFFFF, 0x1234, 0x3412, 0xBEAF]; execute_roundtrip_u16(&buf, 6, 1, ExtendedColorType::L16); } } image-0.25.5/src/codecs/qoi.rs000064400000000000000000000062331046102023000141700ustar 00000000000000//! 
Decoding and encoding of QOI images use crate::error::{DecodingError, EncodingError}; use crate::{ ColorType, ExtendedColorType, ImageDecoder, ImageEncoder, ImageError, ImageFormat, ImageResult, }; use std::io::{Read, Write}; /// QOI decoder pub struct QoiDecoder { decoder: qoi::Decoder, } impl QoiDecoder where R: Read, { /// Creates a new decoder that decodes from the stream ```reader``` pub fn new(reader: R) -> ImageResult { let decoder = qoi::Decoder::from_stream(reader).map_err(decoding_error)?; Ok(Self { decoder }) } } impl ImageDecoder for QoiDecoder { fn dimensions(&self) -> (u32, u32) { (self.decoder.header().width, self.decoder.header().height) } fn color_type(&self) -> ColorType { match self.decoder.header().channels { qoi::Channels::Rgb => ColorType::Rgb8, qoi::Channels::Rgba => ColorType::Rgba8, } } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { self.decoder.decode_to_buf(buf).map_err(decoding_error)?; Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } fn decoding_error(error: qoi::Error) -> ImageError { ImageError::Decoding(DecodingError::new(ImageFormat::Qoi.into(), error)) } fn encoding_error(error: qoi::Error) -> ImageError { ImageError::Encoding(EncodingError::new(ImageFormat::Qoi.into(), error)) } /// QOI encoder pub struct QoiEncoder { writer: W, } impl QoiEncoder { /// Creates a new encoder that writes its output to ```writer``` pub fn new(writer: W) -> Self { Self { writer } } } impl ImageEncoder for QoiEncoder { #[track_caller] fn write_image( mut self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { if !matches!( color_type, ExtendedColorType::Rgba8 | ExtendedColorType::Rgb8 ) { return Err(ImageError::Encoding(EncodingError::new( ImageFormat::Qoi.into(), format!("unsupported color type {color_type:?}. 
Supported are Rgba8 and Rgb8."), ))); } let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); // Encode data in QOI let data = qoi::encode_to_vec(buf, width, height).map_err(encoding_error)?; // Write data to buffer self.writer.write_all(&data[..])?; self.writer.flush()?; Ok(()) } } #[cfg(test)] mod tests { use super::*; use std::fs::File; #[test] fn decode_test_image() { let decoder = QoiDecoder::new(File::open("tests/images/qoi/basic-test.qoi").unwrap()) .expect("Unable to read QOI file"); assert_eq!((5, 5), decoder.dimensions()); assert_eq!(ColorType::Rgba8, decoder.color_type()); } } image-0.25.5/src/codecs/tga/decoder.rs000064400000000000000000000337661046102023000155730ustar 00000000000000use super::header::{Header, ImageType, ALPHA_BIT_MASK, SCREEN_ORIGIN_BIT_MASK}; use crate::{ color::{ColorType, ExtendedColorType}, error::{ ImageError, ImageResult, LimitError, LimitErrorKind, UnsupportedError, UnsupportedErrorKind, }, image::{ImageDecoder, ImageFormat}, }; use byteorder_lite::ReadBytesExt; use std::io::{self, Read}; struct ColorMap { /// sizes in bytes start_offset: usize, entry_size: usize, bytes: Vec, } impl ColorMap { pub(crate) fn from_reader( r: &mut dyn Read, start_offset: u16, num_entries: u16, bits_per_entry: u8, ) -> ImageResult { let bytes_per_entry = (bits_per_entry as usize + 7) / 8; let mut bytes = vec![0; bytes_per_entry * num_entries as usize]; r.read_exact(&mut bytes)?; Ok(ColorMap { entry_size: bytes_per_entry, start_offset: start_offset as usize, bytes, }) } /// Get one entry from the color map pub(crate) fn get(&self, index: usize) -> Option<&[u8]> { let entry = self.start_offset + self.entry_size * index; self.bytes.get(entry..entry + self.entry_size) } } /// The representation of a TGA decoder pub struct TgaDecoder { r: R, width: usize, height: usize, bytes_per_pixel: usize, has_loaded_metadata: bool, image_type: ImageType, color_type: ColorType, original_color_type: Option, header: Header, color_map: Option, } impl TgaDecoder { /// Create a new decoder that decodes from the stream `r` pub fn new(r: R) -> ImageResult> { let mut decoder = TgaDecoder { r, width: 0, height: 0, bytes_per_pixel: 0, has_loaded_metadata: false, image_type: ImageType::Unknown, color_type: ColorType::L8, original_color_type: None, header: Header::default(), color_map: None, }; decoder.read_metadata()?; Ok(decoder) } fn read_header(&mut self) -> ImageResult<()> { self.header = Header::from_reader(&mut self.r)?; self.image_type = ImageType::new(self.header.image_type); self.width = self.header.image_width as usize; self.height = self.header.image_height as usize; self.bytes_per_pixel = (self.header.pixel_depth as usize + 7) / 8; Ok(()) } fn read_metadata(&mut self) -> ImageResult<()> { if !self.has_loaded_metadata { self.read_header()?; self.read_image_id()?; self.read_color_map()?; self.read_color_information()?; self.has_loaded_metadata = true; } Ok(()) } /// Loads the color information for the decoder /// /// To keep things simple, we won't handle bit depths that aren't divisible /// by 8 and are larger than 32. fn read_color_information(&mut self) -> ImageResult<()> { if self.header.pixel_depth % 8 != 0 || self.header.pixel_depth > 32 { // Bit depth must be divisible by 8, and must be less than or equal // to 32. 
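// For example, a 15-bit (5-5-5) true-color file reports `pixel_depth == 15`
// and is rejected by this branch, while 16-, 24- and 32-bit depths continue
// on to the channel-bit matching below.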
return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tga.into(), UnsupportedErrorKind::Color(ExtendedColorType::Unknown( self.header.pixel_depth, )), ), )); } let num_alpha_bits = self.header.image_desc & ALPHA_BIT_MASK; let other_channel_bits = if self.header.map_type != 0 { self.header.map_entry_size } else { if num_alpha_bits > self.header.pixel_depth { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tga.into(), UnsupportedErrorKind::Color(ExtendedColorType::Unknown( self.header.pixel_depth, )), ), )); } self.header.pixel_depth - num_alpha_bits }; let color = self.image_type.is_color(); match (num_alpha_bits, other_channel_bits, color) { // really, the encoding is BGR and BGRA, this is fixed // up with `TgaDecoder::reverse_encoding`. (0, 32, true) => self.color_type = ColorType::Rgba8, (8, 24, true) => self.color_type = ColorType::Rgba8, (0, 24, true) => self.color_type = ColorType::Rgb8, (8, 8, false) => self.color_type = ColorType::La8, (0, 8, false) => self.color_type = ColorType::L8, (8, 0, false) => { // alpha-only image is treated as L8 self.color_type = ColorType::L8; self.original_color_type = Some(ExtendedColorType::A8); } _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tga.into(), UnsupportedErrorKind::Color(ExtendedColorType::Unknown( self.header.pixel_depth, )), ), )) } } Ok(()) } /// Read the image id field /// /// We're not interested in this field, so this function skips it if it /// is present fn read_image_id(&mut self) -> ImageResult<()> { self.r .read_exact(&mut vec![0; self.header.id_length as usize])?; Ok(()) } fn read_color_map(&mut self) -> ImageResult<()> { if self.header.map_type == 1 { // FIXME: we could reverse the map entries, which avoids having to reverse all pixels // in the final output individually. self.color_map = Some(ColorMap::from_reader( &mut self.r, self.header.map_origin, self.header.map_length, self.header.map_entry_size, )?); } Ok(()) } /// Expands indices into its mapped color fn expand_color_map(&self, pixel_data: &[u8]) -> io::Result> { #[inline] fn bytes_to_index(bytes: &[u8]) -> usize { let mut result = 0usize; for byte in bytes { result = result << 8 | *byte as usize; } result } let bytes_per_entry = (self.header.map_entry_size as usize + 7) / 8; let mut result = Vec::with_capacity(self.width * self.height * bytes_per_entry); if self.bytes_per_pixel == 0 { return Err(io::ErrorKind::Other.into()); } let color_map = self .color_map .as_ref() .ok_or_else(|| io::Error::from(io::ErrorKind::Other))?; for chunk in pixel_data.chunks(self.bytes_per_pixel) { let index = bytes_to_index(chunk); if let Some(color) = color_map.get(index) { result.extend_from_slice(color); } else { return Err(io::ErrorKind::Other.into()); } } Ok(result) } /// Reads a run length encoded data for given number of bytes fn read_encoded_data(&mut self, num_bytes: usize) -> io::Result> { let mut pixel_data = Vec::with_capacity(num_bytes); let mut repeat_buf = Vec::with_capacity(self.bytes_per_pixel); while pixel_data.len() < num_bytes { let run_packet = self.r.read_u8()?; // If the highest bit in `run_packet` is set, then we repeat pixels // // Note: the TGA format adds 1 to both counts because having a count // of 0 would be pointless. 
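// For example, a packet byte of 0x83 has the high bit set and a count field
// of 3, so the single pixel that follows is repeated 3 + 1 = 4 times; a
// packet byte of 0x02 has the high bit clear, so the next 2 + 1 = 3 pixels
// are stored literally.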
if (run_packet & 0x80) != 0 { // high bit set, so we will repeat the data let repeat_count = ((run_packet & !0x80) + 1) as usize; self.r .by_ref() .take(self.bytes_per_pixel as u64) .read_to_end(&mut repeat_buf)?; // get the repeating pixels from the bytes of the pixel stored in `repeat_buf` let data = repeat_buf .iter() .cycle() .take(repeat_count * self.bytes_per_pixel); pixel_data.extend(data); repeat_buf.clear(); } else { // not set, so `run_packet+1` is the number of non-encoded pixels let num_raw_bytes = (run_packet + 1) as usize * self.bytes_per_pixel; self.r .by_ref() .take(num_raw_bytes as u64) .read_to_end(&mut pixel_data)?; } } if pixel_data.len() > num_bytes { // FIXME: the last packet contained more data than we asked for! // This is at least a warning. We truncate the data since some methods rely on the // length to be accurate in the success case. pixel_data.truncate(num_bytes); } Ok(pixel_data) } /// Reads a run length encoded packet fn read_all_encoded_data(&mut self) -> ImageResult> { let num_bytes = self.width * self.height * self.bytes_per_pixel; Ok(self.read_encoded_data(num_bytes)?) } /// Reverse from BGR encoding to RGB encoding /// /// TGA files are stored in the BGRA encoding. This function swaps /// the blue and red bytes in the `pixels` array. fn reverse_encoding_in_output(&mut self, pixels: &mut [u8]) { // We only need to reverse the encoding of color images match self.color_type { ColorType::Rgb8 | ColorType::Rgba8 => { for chunk in pixels.chunks_mut(self.color_type.bytes_per_pixel().into()) { chunk.swap(0, 2); } } _ => {} } } /// Flip the image vertically depending on the screen origin bit /// /// The bit in position 5 of the image descriptor byte is the screen origin bit. /// If it's 1, the origin is in the top left corner. /// If it's 0, the origin is in the bottom left corner. /// This function checks the bit, and if it's 0, flips the image vertically. fn flip_vertically(&mut self, pixels: &mut [u8]) { if self.is_flipped_vertically() { if self.height == 0 { return; } let num_bytes = pixels.len(); let width_bytes = num_bytes / self.height; // Flip the image vertically. for vertical_index in 0..(self.height / 2) { let vertical_target = (self.height - vertical_index) * width_bytes - width_bytes; for horizontal_index in 0..width_bytes { let source = vertical_index * width_bytes + horizontal_index; let target = vertical_target + horizontal_index; pixels.swap(target, source); } } } } /// Check whether the image is vertically flipped /// /// The bit in position 5 of the image descriptor byte is the screen origin bit. /// If it's 1, the origin is in the top left corner. /// If it's 0, the origin is in the bottom left corner. /// This function checks the bit, and if it's 0, flips the image vertically. fn is_flipped_vertically(&self) -> bool { let screen_origin_bit = SCREEN_ORIGIN_BIT_MASK & self.header.image_desc != 0; !screen_origin_bit } } impl ImageDecoder for TgaDecoder { fn dimensions(&self) -> (u32, u32) { (self.width as u32, self.height as u32) } fn color_type(&self) -> ColorType { self.color_type } fn original_color_type(&self) -> ExtendedColorType { self.original_color_type .unwrap_or_else(|| self.color_type().into()) } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); // In indexed images, we might need more bytes than pixels to read them. That's nonsensical // to encode but we'll not want to crash. 
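// `fallback_buf` is only used when a raw pixel (for example a color-map
// index) is wider than the decoded output pixel, so the raw data cannot be
// staged directly in `buf`.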
let mut fallback_buf = vec![]; // read the pixels from the data region let rawbuf = if self.image_type.is_encoded() { let pixel_data = self.read_all_encoded_data()?; if self.bytes_per_pixel <= usize::from(self.color_type.bytes_per_pixel()) { buf[..pixel_data.len()].copy_from_slice(&pixel_data); &buf[..pixel_data.len()] } else { fallback_buf = pixel_data; &fallback_buf[..] } } else { let num_raw_bytes = self.width * self.height * self.bytes_per_pixel; if self.bytes_per_pixel <= usize::from(self.color_type.bytes_per_pixel()) { self.r.by_ref().read_exact(&mut buf[..num_raw_bytes])?; &buf[..num_raw_bytes] } else { fallback_buf.resize(num_raw_bytes, 0u8); self.r .by_ref() .read_exact(&mut fallback_buf[..num_raw_bytes])?; &fallback_buf[..num_raw_bytes] } }; // expand the indices using the color map if necessary if self.image_type.is_color_mapped() { let pixel_data = self.expand_color_map(rawbuf)?; // not enough data to fill the buffer, or would overflow the buffer if pixel_data.len() != buf.len() { return Err(ImageError::Limits(LimitError::from_kind( LimitErrorKind::DimensionError, ))); } buf.copy_from_slice(&pixel_data); } self.reverse_encoding_in_output(buf); self.flip_vertically(buf); Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } image-0.25.5/src/codecs/tga/encoder.rs000064400000000000000000000415441046102023000155760ustar 00000000000000use super::header::Header; use crate::{ codecs::tga::header::ImageType, error::EncodingError, ExtendedColorType, ImageEncoder, ImageError, ImageFormat, ImageResult, }; use std::{error, fmt, io::Write}; /// Errors that can occur during encoding and saving of a TGA image. #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] enum EncoderError { /// Invalid TGA width. WidthInvalid(u32), /// Invalid TGA height. HeightInvalid(u32), } impl fmt::Display for EncoderError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { EncoderError::WidthInvalid(s) => f.write_fmt(format_args!("Invalid TGA width: {s}")), EncoderError::HeightInvalid(s) => f.write_fmt(format_args!("Invalid TGA height: {s}")), } } } impl From for ImageError { fn from(e: EncoderError) -> ImageError { ImageError::Encoding(EncodingError::new(ImageFormat::Tga.into(), e)) } } impl error::Error for EncoderError {} /// TGA encoder. pub struct TgaEncoder { writer: W, /// Run-length encoding use_rle: bool, } const MAX_RUN_LENGTH: u8 = 128; #[derive(Debug, Eq, PartialEq)] enum PacketType { Raw, Rle, } impl TgaEncoder { /// Create a new encoder that writes its output to ```w```. 
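///
/// A minimal usage sketch (the 2x1 `Rgb8` buffer and the in-memory `Vec<u8>`
/// sink are illustrative):
///
/// ```
/// use image::codecs::tga::TgaEncoder;
/// use image::ExtendedColorType;
///
/// // Two RGB8 pixels: one black, one white.
/// let pixels = [0u8, 0, 0, 255, 255, 255];
/// let mut out = Vec::new();
/// TgaEncoder::new(&mut out)
///     .encode(&pixels, 2, 1, ExtendedColorType::Rgb8)
///     .expect("encoding a 2x1 RGB image should succeed");
/// ```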
pub fn new(w: W) -> TgaEncoder { TgaEncoder { writer: w, use_rle: true, } } /// Disables run-length encoding pub fn disable_rle(mut self) -> TgaEncoder { self.use_rle = false; self } /// Writes a raw packet to the writer fn write_raw_packet(&mut self, pixels: &[u8], counter: u8) -> ImageResult<()> { // Set high bit = 0 and store counter - 1 (because 0 would be useless) // The counter fills 7 bits max, so the high bit is set to 0 implicitly let header = counter - 1; self.writer.write_all(&[header])?; self.writer.write_all(pixels)?; Ok(()) } /// Writes a run-length encoded packet to the writer fn write_rle_encoded_packet(&mut self, pixel: &[u8], counter: u8) -> ImageResult<()> { // Set high bit = 1 and store counter - 1 (because 0 would be useless) let header = 0x80 | (counter - 1); self.writer.write_all(&[header])?; self.writer.write_all(pixel)?; Ok(()) } /// Writes the run-length encoded buffer to the writer fn run_length_encode( &mut self, image: &[u8], color_type: ExtendedColorType, ) -> ImageResult<()> { use PacketType::*; let bytes_per_pixel = color_type.bits_per_pixel() / 8; let capacity_in_bytes = usize::from(MAX_RUN_LENGTH) * usize::from(bytes_per_pixel); // Buffer to temporarily store pixels // so we can choose whether to use RLE or not when we need to let mut buf = Vec::with_capacity(capacity_in_bytes); let mut counter = 0; let mut prev_pixel = None; let mut packet_type = Rle; for pixel in image.chunks(usize::from(bytes_per_pixel)) { // Make sure we are not at the first pixel if let Some(prev) = prev_pixel { if pixel == prev { if packet_type == Raw && counter > 0 { self.write_raw_packet(&buf, counter)?; counter = 0; buf.clear(); } packet_type = Rle; } else if packet_type == Rle && counter > 0 { self.write_rle_encoded_packet(prev, counter)?; counter = 0; packet_type = Raw; buf.clear(); } } counter += 1; buf.extend_from_slice(pixel); debug_assert!(buf.len() <= capacity_in_bytes); if counter == MAX_RUN_LENGTH { match packet_type { Rle => self.write_rle_encoded_packet(prev_pixel.unwrap(), counter), Raw => self.write_raw_packet(&buf, counter), }?; counter = 0; packet_type = Rle; buf.clear(); } prev_pixel = Some(pixel); } if counter > 0 { match packet_type { Rle => self.write_rle_encoded_packet(prev_pixel.unwrap(), counter), Raw => self.write_raw_packet(&buf, counter), }?; } Ok(()) } /// Encodes the image ```buf``` that has dimensions ```width``` /// and ```height``` and ```ColorType``` ```color_type```. /// /// The dimensions of the image must be between 0 and 65535 (inclusive) or /// an error will be returned. /// /// # Panics /// /// Panics if `width * height * color_type.bytes_per_pixel() != data.len()`. #[track_caller] pub fn encode( mut self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); // Validate dimensions. let width = u16::try_from(width) .map_err(|_| ImageError::from(EncoderError::WidthInvalid(width)))?; let height = u16::try_from(height) .map_err(|_| ImageError::from(EncoderError::HeightInvalid(height)))?; // Write out TGA header. 
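// `Header::from_pixel_info` fills the fixed 18-byte TGA header from the
// (already `u16`-narrowed) dimensions, the pixel depth implied by
// `color_type`, and an image type that is run-length encoded iff `use_rle`
// is set.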
let header = Header::from_pixel_info(color_type, width, height, self.use_rle)?; header.write_to(&mut self.writer)?; let image_type = ImageType::new(header.image_type); match image_type { //TODO: support RunColorMap, and change match to image_type.is_encoded() ImageType::RunTrueColor | ImageType::RunGrayScale => { // Write run-length encoded image data match color_type { ExtendedColorType::Rgb8 | ExtendedColorType::Rgba8 => { let mut image = Vec::from(buf); for pixel in image.chunks_mut(usize::from(color_type.bits_per_pixel() / 8)) { pixel.swap(0, 2); } self.run_length_encode(&image, color_type)?; } _ => { self.run_length_encode(buf, color_type)?; } } } _ => { // Write uncompressed image data match color_type { ExtendedColorType::Rgb8 | ExtendedColorType::Rgba8 => { let mut image = Vec::from(buf); for pixel in image.chunks_mut(usize::from(color_type.bits_per_pixel() / 8)) { pixel.swap(0, 2); } self.writer.write_all(&image)?; } _ => { self.writer.write_all(buf)?; } } } } Ok(()) } } impl ImageEncoder for TgaEncoder { #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { self.encode(buf, width, height, color_type) } } #[cfg(test)] mod tests { use super::{EncoderError, TgaEncoder}; use crate::{codecs::tga::TgaDecoder, ExtendedColorType, ImageDecoder, ImageError}; use std::{error::Error, io::Cursor}; #[test] fn test_image_width_too_large() { // TGA cannot encode images larger than 65,535×65,535 // create a 65,536×1 8-bit black image buffer let size = usize::from(u16::MAX) + 1; let dimension = size as u32; let img = vec![0u8; size]; // Try to encode an image that is too large let mut encoded = Vec::new(); let encoder = TgaEncoder::new(&mut encoded); let result = encoder.encode(&img, dimension, 1, ExtendedColorType::L8); match result { Err(ImageError::Encoding(err)) => { let err = err .source() .unwrap() .downcast_ref::() .unwrap(); assert_eq!(*err, EncoderError::WidthInvalid(dimension)); } other => panic!( "Encoding an image that is too wide should return a InvalidWidth \ it returned {:?} instead", other ), } } #[test] fn test_image_height_too_large() { // TGA cannot encode images larger than 65,535×65,535 // create a 65,536×1 8-bit black image buffer let size = usize::from(u16::MAX) + 1; let dimension = size as u32; let img = vec![0u8; size]; // Try to encode an image that is too large let mut encoded = Vec::new(); let encoder = TgaEncoder::new(&mut encoded); let result = encoder.encode(&img, 1, dimension, ExtendedColorType::L8); match result { Err(ImageError::Encoding(err)) => { let err = err .source() .unwrap() .downcast_ref::() .unwrap(); assert_eq!(*err, EncoderError::HeightInvalid(dimension)); } other => panic!( "Encoding an image that is too tall should return a InvalidHeight \ it returned {:?} instead", other ), } } #[test] fn test_compression_diff() { let image = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]; let uncompressed_bytes = { let mut encoded_data = Vec::new(); let encoder = TgaEncoder::new(&mut encoded_data).disable_rle(); encoder .encode(&image, 5, 1, ExtendedColorType::Rgb8) .expect("could not encode image"); encoded_data }; let compressed_bytes = { let mut encoded_data = Vec::new(); let encoder = TgaEncoder::new(&mut encoded_data); encoder .encode(&image, 5, 1, ExtendedColorType::Rgb8) .expect("could not encode image"); encoded_data }; assert!(uncompressed_bytes.len() > compressed_bytes.len()); } mod compressed { use super::*; fn round_trip_image( image: &[u8], width: u32, height: u32, c: 
ExtendedColorType, ) -> Vec { let mut encoded_data = Vec::new(); { let encoder = TgaEncoder::new(&mut encoded_data); encoder .encode(image, width, height, c) .expect("could not encode image"); } let decoder = TgaDecoder::new(Cursor::new(&encoded_data)).expect("failed to decode"); let mut buf = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut buf).expect("failed to decode"); buf } #[test] fn mixed_packets() { let image = [ 255, 255, 255, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, ]; let decoded = round_trip_image(&image, 5, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_gray() { let image = [0, 1, 2]; let decoded = round_trip_image(&image, 3, 1, ExtendedColorType::L8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_graya() { let image = [0, 1, 2, 3, 4, 5]; let decoded = round_trip_image(&image, 1, 3, ExtendedColorType::La8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_single_pixel_rgb() { let image = [0, 1, 2]; let decoded = round_trip_image(&image, 1, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_three_pixel_rgb() { let image = [0, 1, 2, 0, 1, 2, 0, 1, 2]; let decoded = round_trip_image(&image, 3, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_3px_rgb() { let image = [0; 3 * 3 * 3]; // 3x3 pixels, 3 bytes per pixel let decoded = round_trip_image(&image, 3, 3, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_different() { let image = [0, 1, 2, 0, 1, 3, 0, 1, 4]; let decoded = round_trip_image(&image, 3, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_different_2() { let image = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 4]; let decoded = round_trip_image(&image, 4, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_different_3() { let image = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 4, 0, 1, 2]; let decoded = round_trip_image(&image, 5, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_bw() { // This example demonstrates the run-length counter being saturated // It should never overflow and can be 128 max let image = crate::open("tests/images/tga/encoding/black_white.tga").unwrap(); let (width, height) = (image.width(), image.height()); let image = image.as_rgb8().unwrap().to_vec(); let decoded = round_trip_image(&image, width, height, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } } mod uncompressed { use super::*; fn round_trip_image( image: &[u8], width: u32, height: u32, c: ExtendedColorType, ) -> Vec { let mut encoded_data = Vec::new(); { let encoder = TgaEncoder::new(&mut encoded_data).disable_rle(); encoder .encode(image, width, height, c) .expect("could not encode image"); } let decoder = TgaDecoder::new(Cursor::new(&encoded_data)).expect("failed to decode"); let mut buf = vec![0; decoder.total_bytes() as usize]; decoder.read_image(&mut buf).expect("failed to decode"); buf } #[test] fn round_trip_single_pixel_rgb() { let image = 
[0, 1, 2]; let decoded = round_trip_image(&image, 1, 1, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_single_pixel_rgba() { let image = [0, 1, 2, 3]; let decoded = round_trip_image(&image, 1, 1, ExtendedColorType::Rgba8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_gray() { let image = [0, 1, 2]; let decoded = round_trip_image(&image, 3, 1, ExtendedColorType::L8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_graya() { let image = [0, 1, 2, 3, 4, 5]; let decoded = round_trip_image(&image, 1, 3, ExtendedColorType::La8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } #[test] fn round_trip_3px_rgb() { let image = [0; 3 * 3 * 3]; // 3x3 pixels, 3 bytes per pixel let decoded = round_trip_image(&image, 3, 3, ExtendedColorType::Rgb8); assert_eq!(decoded.len(), image.len()); assert_eq!(decoded.as_slice(), image); } } } image-0.25.5/src/codecs/tga/header.rs000064400000000000000000000131721046102023000154030ustar 00000000000000use crate::error::{UnsupportedError, UnsupportedErrorKind}; use crate::{ExtendedColorType, ImageError, ImageFormat, ImageResult}; use byteorder_lite::{LittleEndian, ReadBytesExt, WriteBytesExt}; use std::io::{Read, Write}; pub(crate) const ALPHA_BIT_MASK: u8 = 0b1111; pub(crate) const SCREEN_ORIGIN_BIT_MASK: u8 = 0b10_0000; pub(crate) enum ImageType { NoImageData = 0, /// Uncompressed images. RawColorMap = 1, RawTrueColor = 2, RawGrayScale = 3, /// Run length encoded images. RunColorMap = 9, RunTrueColor = 10, RunGrayScale = 11, Unknown, } impl ImageType { /// Create a new image type from a u8. pub(crate) fn new(img_type: u8) -> ImageType { match img_type { 0 => ImageType::NoImageData, 1 => ImageType::RawColorMap, 2 => ImageType::RawTrueColor, 3 => ImageType::RawGrayScale, 9 => ImageType::RunColorMap, 10 => ImageType::RunTrueColor, 11 => ImageType::RunGrayScale, _ => ImageType::Unknown, } } /// Check if the image format uses colors as opposed to gray scale. pub(crate) fn is_color(&self) -> bool { matches! { *self, ImageType::RawColorMap | ImageType::RawTrueColor | ImageType::RunTrueColor | ImageType::RunColorMap } } /// Does the image use a color map. pub(crate) fn is_color_mapped(&self) -> bool { matches! { *self, ImageType::RawColorMap | ImageType::RunColorMap } } /// Is the image run length encoded. pub(crate) fn is_encoded(&self) -> bool { matches! {*self, ImageType::RunColorMap | ImageType::RunTrueColor | ImageType::RunGrayScale } } } /// Header used by TGA image files. #[derive(Debug, Default)] pub(crate) struct Header { pub(crate) id_length: u8, // length of ID string pub(crate) map_type: u8, // color map type pub(crate) image_type: u8, // image type code pub(crate) map_origin: u16, // starting index of map pub(crate) map_length: u16, // length of map pub(crate) map_entry_size: u8, // size of map entries in bits pub(crate) x_origin: u16, // x-origin of image pub(crate) y_origin: u16, // y-origin of image pub(crate) image_width: u16, // width of image pub(crate) image_height: u16, // height of image pub(crate) pixel_depth: u8, // bits per pixel pub(crate) image_desc: u8, // image descriptor } impl Header { /// Load the header with values from pixel information. 
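///
/// For example, `ExtendedColorType::Rgb8` with `use_rle == true` maps to image
/// type 10 (run-length encoded true color) and a 24-bit pixel depth, while
/// `ExtendedColorType::L8` without RLE maps to image type 3 (raw gray scale)
/// and an 8-bit pixel depth.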
pub(crate) fn from_pixel_info( color_type: ExtendedColorType, width: u16, height: u16, use_rle: bool, ) -> ImageResult { let mut header = Self::default(); if width > 0 && height > 0 { let (num_alpha_bits, other_channel_bits, image_type) = match (color_type, use_rle) { (ExtendedColorType::Rgba8, true) => (8, 24, ImageType::RunTrueColor), (ExtendedColorType::Rgb8, true) => (0, 24, ImageType::RunTrueColor), (ExtendedColorType::La8, true) => (8, 8, ImageType::RunGrayScale), (ExtendedColorType::L8, true) => (0, 8, ImageType::RunGrayScale), (ExtendedColorType::Rgba8, false) => (8, 24, ImageType::RawTrueColor), (ExtendedColorType::Rgb8, false) => (0, 24, ImageType::RawTrueColor), (ExtendedColorType::La8, false) => (8, 8, ImageType::RawGrayScale), (ExtendedColorType::L8, false) => (0, 8, ImageType::RawGrayScale), _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tga.into(), UnsupportedErrorKind::Color(color_type), ), )) } }; header.image_type = image_type as u8; header.image_width = width; header.image_height = height; header.pixel_depth = num_alpha_bits + other_channel_bits; header.image_desc = num_alpha_bits & ALPHA_BIT_MASK; header.image_desc |= SCREEN_ORIGIN_BIT_MASK; // Upper left origin. } Ok(header) } /// Load the header with values from the reader. pub(crate) fn from_reader(r: &mut dyn Read) -> ImageResult { Ok(Self { id_length: r.read_u8()?, map_type: r.read_u8()?, image_type: r.read_u8()?, map_origin: r.read_u16::()?, map_length: r.read_u16::()?, map_entry_size: r.read_u8()?, x_origin: r.read_u16::()?, y_origin: r.read_u16::()?, image_width: r.read_u16::()?, image_height: r.read_u16::()?, pixel_depth: r.read_u8()?, image_desc: r.read_u8()?, }) } /// Write out the header values. pub(crate) fn write_to(&self, w: &mut dyn Write) -> ImageResult<()> { w.write_u8(self.id_length)?; w.write_u8(self.map_type)?; w.write_u8(self.image_type)?; w.write_u16::(self.map_origin)?; w.write_u16::(self.map_length)?; w.write_u8(self.map_entry_size)?; w.write_u16::(self.x_origin)?; w.write_u16::(self.y_origin)?; w.write_u16::(self.image_width)?; w.write_u16::(self.image_height)?; w.write_u8(self.pixel_depth)?; w.write_u8(self.image_desc)?; Ok(()) } } image-0.25.5/src/codecs/tga/mod.rs000064400000000000000000000005531046102023000147310ustar 00000000000000//! Decoding of TGA Images //! //! # Related Links //! /// A decoder for TGA images /// /// Currently this decoder does not support 8, 15 and 16 bit color images. pub use self::decoder::TgaDecoder; //TODO add 8, 15, 16 bit color support pub use self::encoder::TgaEncoder; mod decoder; mod encoder; mod header; image-0.25.5/src/codecs/tiff.rs000064400000000000000000000346051046102023000143340ustar 00000000000000//! Decoding and Encoding of TIFF Images //! //! TIFF (Tagged Image File Format) is a versatile image format that supports //! lossless and lossy compression. //! //! # Related Links //! * - The TIFF specification extern crate tiff; use std::io::{self, BufRead, Cursor, Read, Seek, Write}; use std::marker::PhantomData; use std::mem; use crate::color::{ColorType, ExtendedColorType}; use crate::error::{ DecodingError, EncodingError, ImageError, ImageResult, LimitError, LimitErrorKind, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{ImageDecoder, ImageEncoder, ImageFormat}; use crate::metadata::Orientation; /// Decoder for TIFF images. 
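///
/// A minimal decoding sketch (the file path is a placeholder):
///
/// ```no_run
/// use std::fs::File;
/// use std::io::BufReader;
/// use image::codecs::tiff::TiffDecoder;
/// use image::ImageDecoder;
///
/// let reader = BufReader::new(File::open("example.tiff").unwrap());
/// let decoder = TiffDecoder::new(reader).unwrap();
/// let mut pixels = vec![0u8; decoder.total_bytes() as usize];
/// decoder.read_image(&mut pixels).unwrap();
/// ```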
pub struct TiffDecoder where R: BufRead + Seek, { dimensions: (u32, u32), color_type: ColorType, original_color_type: ExtendedColorType, // We only use an Option here so we can call with_limits on the decoder without moving. inner: Option>, } impl TiffDecoder where R: BufRead + Seek, { /// Create a new `TiffDecoder`. pub fn new(r: R) -> Result, ImageError> { let mut inner = tiff::decoder::Decoder::new(r).map_err(ImageError::from_tiff_decode)?; let dimensions = inner.dimensions().map_err(ImageError::from_tiff_decode)?; let tiff_color_type = inner.colortype().map_err(ImageError::from_tiff_decode)?; match inner.find_tag_unsigned_vec::(tiff::tags::Tag::SampleFormat) { Ok(Some(sample_formats)) => { for format in sample_formats { check_sample_format(format)?; } } Ok(None) => { /* assume UInt format */ } Err(other) => return Err(ImageError::from_tiff_decode(other)), }; let color_type = match tiff_color_type { tiff::ColorType::Gray(8) => ColorType::L8, tiff::ColorType::Gray(16) => ColorType::L16, tiff::ColorType::GrayA(8) => ColorType::La8, tiff::ColorType::GrayA(16) => ColorType::La16, tiff::ColorType::RGB(8) => ColorType::Rgb8, tiff::ColorType::RGB(16) => ColorType::Rgb16, tiff::ColorType::RGBA(8) => ColorType::Rgba8, tiff::ColorType::RGBA(16) => ColorType::Rgba16, tiff::ColorType::CMYK(8) => ColorType::Rgb8, tiff::ColorType::Palette(n) | tiff::ColorType::Gray(n) => { return Err(err_unknown_color_type(n)) } tiff::ColorType::GrayA(n) => return Err(err_unknown_color_type(n.saturating_mul(2))), tiff::ColorType::RGB(n) => return Err(err_unknown_color_type(n.saturating_mul(3))), tiff::ColorType::YCbCr(n) => return Err(err_unknown_color_type(n.saturating_mul(3))), tiff::ColorType::RGBA(n) | tiff::ColorType::CMYK(n) => { return Err(err_unknown_color_type(n.saturating_mul(4))) } }; let original_color_type = match tiff_color_type { tiff::ColorType::CMYK(8) => ExtendedColorType::Cmyk8, _ => color_type.into(), }; Ok(TiffDecoder { dimensions, color_type, original_color_type, inner: Some(inner), }) } // The buffer can be larger for CMYK than the RGB output fn total_bytes_buffer(&self) -> u64 { let dimensions = self.dimensions(); let total_pixels = u64::from(dimensions.0) * u64::from(dimensions.1); let bytes_per_pixel = if self.original_color_type == ExtendedColorType::Cmyk8 { 16 } else { u64::from(self.color_type().bytes_per_pixel()) }; total_pixels.saturating_mul(bytes_per_pixel) } } fn check_sample_format(sample_format: u16) -> Result<(), ImageError> { match tiff::tags::SampleFormat::from_u16(sample_format) { Some(tiff::tags::SampleFormat::Uint) => Ok(()), Some(other) => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tiff.into(), UnsupportedErrorKind::GenericFeature(format!( "Unhandled TIFF sample format {other:?}" )), ), )), None => Err(ImageError::Decoding(DecodingError::from_format_hint( ImageFormat::Tiff.into(), ))), } } fn err_unknown_color_type(value: u8) -> ImageError { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Tiff.into(), UnsupportedErrorKind::Color(ExtendedColorType::Unknown(value)), )) } impl ImageError { fn from_tiff_decode(err: tiff::TiffError) -> ImageError { match err { tiff::TiffError::IoError(err) => ImageError::IoError(err), err @ (tiff::TiffError::FormatError(_) | tiff::TiffError::IntSizeError | tiff::TiffError::UsageError(_)) => { ImageError::Decoding(DecodingError::new(ImageFormat::Tiff.into(), err)) } tiff::TiffError::UnsupportedError(desc) => { ImageError::Unsupported(UnsupportedError::from_format_and_kind( 
ImageFormat::Tiff.into(), UnsupportedErrorKind::GenericFeature(desc.to_string()), )) } tiff::TiffError::LimitsExceeded => { ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory)) } } } fn from_tiff_encode(err: tiff::TiffError) -> ImageError { match err { tiff::TiffError::IoError(err) => ImageError::IoError(err), err @ (tiff::TiffError::FormatError(_) | tiff::TiffError::IntSizeError | tiff::TiffError::UsageError(_)) => { ImageError::Encoding(EncodingError::new(ImageFormat::Tiff.into(), err)) } tiff::TiffError::UnsupportedError(desc) => { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormat::Tiff.into(), UnsupportedErrorKind::GenericFeature(desc.to_string()), )) } tiff::TiffError::LimitsExceeded => { ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory)) } } } } /// Wrapper struct around a `Cursor>` #[allow(dead_code)] #[deprecated] pub struct TiffReader(Cursor>, PhantomData); #[allow(deprecated)] impl Read for TiffReader { fn read(&mut self, buf: &mut [u8]) -> io::Result { self.0.read(buf) } fn read_to_end(&mut self, buf: &mut Vec) -> io::Result { if self.0.position() == 0 && buf.is_empty() { mem::swap(buf, self.0.get_mut()); Ok(buf.len()) } else { self.0.read_to_end(buf) } } } impl ImageDecoder for TiffDecoder { fn dimensions(&self) -> (u32, u32) { self.dimensions } fn color_type(&self) -> ColorType { self.color_type } fn original_color_type(&self) -> ExtendedColorType { self.original_color_type } fn icc_profile(&mut self) -> ImageResult>> { if let Some(decoder) = &mut self.inner { Ok(decoder.get_tag_u8_vec(tiff::tags::Tag::Unknown(34675)).ok()) } else { Ok(None) } } fn orientation(&mut self) -> ImageResult { if let Some(decoder) = &mut self.inner { Ok(decoder .find_tag(tiff::tags::Tag::Orientation) .map_err(ImageError::from_tiff_decode)? .and_then(|v| Orientation::from_exif(v.into_u16().ok()?.min(255) as u8)) .unwrap_or(Orientation::NoTransforms)) } else { Ok(Orientation::NoTransforms) } } fn set_limits(&mut self, limits: crate::Limits) -> ImageResult<()> { limits.check_support(&crate::LimitSupport::default())?; let (width, height) = self.dimensions(); limits.check_dimensions(width, height)?; let max_alloc = limits.max_alloc.unwrap_or(u64::MAX); let max_intermediate_alloc = max_alloc.saturating_sub(self.total_bytes_buffer()); let mut tiff_limits: tiff::decoder::Limits = Default::default(); tiff_limits.decoding_buffer_size = usize::try_from(max_alloc - max_intermediate_alloc).unwrap_or(usize::MAX); tiff_limits.intermediate_buffer_size = usize::try_from(max_intermediate_alloc).unwrap_or(usize::MAX); tiff_limits.ifd_value_size = tiff_limits.intermediate_buffer_size; self.inner = Some(self.inner.take().unwrap().with_limits(tiff_limits)); Ok(()) } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); match self .inner .unwrap() .read_image() .map_err(ImageError::from_tiff_decode)? 
{ tiff::decoder::DecodingResult::U8(v) if self.original_color_type == ExtendedColorType::Cmyk8 => { let mut out_cur = Cursor::new(buf); for cmyk in v.chunks_exact(4) { out_cur.write_all(&cmyk_to_rgb(cmyk))?; } } tiff::decoder::DecodingResult::U8(v) => { buf.copy_from_slice(&v); } tiff::decoder::DecodingResult::U16(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::U32(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::U64(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::I8(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::I16(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::I32(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::I64(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::F32(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } tiff::decoder::DecodingResult::F64(v) => { buf.copy_from_slice(bytemuck::cast_slice(&v)); } } Ok(()) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } /// Encoder for tiff images pub struct TiffEncoder { w: W, } fn cmyk_to_rgb(cmyk: &[u8]) -> [u8; 3] { let c = f32::from(cmyk[0]); let m = f32::from(cmyk[1]); let y = f32::from(cmyk[2]); let kf = 1. - f32::from(cmyk[3]) / 255.; [ ((255. - c) * kf) as u8, ((255. - m) * kf) as u8, ((255. - y) * kf) as u8, ] } // Utility to simplify and deduplicate error handling during 16-bit encoding. fn u8_slice_as_u16(buf: &[u8]) -> ImageResult<&[u16]> { bytemuck::try_cast_slice(buf).map_err(|err| { // If the buffer is not aligned or the correct length for a u16 slice, err. // // `bytemuck::PodCastError` of bytemuck-1.2.0 does not implement // `Error` and `Display` trait. // See . ImageError::Parameter(ParameterError::from_kind(ParameterErrorKind::Generic( format!("{err:?}"), ))) }) } impl TiffEncoder { /// Create a new encoder that writes its output to `w` pub fn new(w: W) -> TiffEncoder { TiffEncoder { w } } /// Encodes the image `image` that has dimensions `width` and `height` and `ColorType` `c`. /// /// 16-bit types assume the buffer is native endian. /// /// # Panics /// /// Panics if `width * height * color_type.bytes_per_pixel() != data.len()`. 
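///
/// A minimal encoding sketch (the in-memory cursor and the 1x1 `Rgb8` buffer
/// are illustrative):
///
/// ```
/// use std::io::Cursor;
/// use image::codecs::tiff::TiffEncoder;
/// use image::ExtendedColorType;
///
/// let mut out = Cursor::new(Vec::new());
/// TiffEncoder::new(&mut out)
///     .encode(&[255u8, 0, 0], 1, 1, ExtendedColorType::Rgb8)
///     .unwrap();
/// ```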
#[track_caller] pub fn encode( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); let mut encoder = tiff::encoder::TiffEncoder::new(self.w).map_err(ImageError::from_tiff_encode)?; match color_type { ExtendedColorType::L8 => { encoder.write_image::(width, height, buf) } ExtendedColorType::Rgb8 => { encoder.write_image::(width, height, buf) } ExtendedColorType::Rgba8 => { encoder.write_image::(width, height, buf) } ExtendedColorType::L16 => encoder.write_image::( width, height, u8_slice_as_u16(buf)?, ), ExtendedColorType::Rgb16 => encoder.write_image::( width, height, u8_slice_as_u16(buf)?, ), ExtendedColorType::Rgba16 => encoder.write_image::( width, height, u8_slice_as_u16(buf)?, ), _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::Tiff.into(), UnsupportedErrorKind::Color(color_type), ), )) } } .map_err(ImageError::from_tiff_encode)?; Ok(()) } } impl ImageEncoder for TiffEncoder { #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { self.encode(buf, width, height, color_type) } } image-0.25.5/src/codecs/webp/decoder.rs000064400000000000000000000131031046102023000157340ustar 00000000000000use std::io::{BufRead, Read, Seek}; use crate::buffer::ConvertBuffer; use crate::error::{DecodingError, ImageError, ImageResult}; use crate::image::{ImageDecoder, ImageFormat}; use crate::metadata::Orientation; use crate::{AnimationDecoder, ColorType, Delay, Frame, Frames, RgbImage, Rgba, RgbaImage}; /// WebP Image format decoder. /// /// Supports both lossless and lossy WebP images. pub struct WebPDecoder { inner: image_webp::WebPDecoder, orientation: Option, } impl WebPDecoder { /// Create a new `WebPDecoder` from the Reader `r`. pub fn new(r: R) -> ImageResult { Ok(Self { inner: image_webp::WebPDecoder::new(r).map_err(ImageError::from_webp_decode)?, orientation: None, }) } /// Returns true if the image as described by the bitstream is animated. pub fn has_animation(&self) -> bool { self.inner.is_animated() } /// Sets the background color if the image is an extended and animated webp. 
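///
/// A sketch of overriding the background color before reading animation
/// frames (the path is a placeholder and the file is assumed to be an
/// animated WebP):
///
/// ```no_run
/// use std::fs::File;
/// use std::io::BufReader;
/// use image::codecs::webp::WebPDecoder;
/// use image::Rgba;
///
/// let reader = BufReader::new(File::open("animation.webp").unwrap());
/// let mut decoder = WebPDecoder::new(reader).unwrap();
/// if decoder.has_animation() {
///     decoder.set_background_color(Rgba([255, 255, 255, 255])).unwrap();
/// }
/// ```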
pub fn set_background_color(&mut self, color: Rgba) -> ImageResult<()> { self.inner .set_background_color(color.0) .map_err(ImageError::from_webp_decode) } } impl ImageDecoder for WebPDecoder { fn dimensions(&self) -> (u32, u32) { self.inner.dimensions() } fn color_type(&self) -> ColorType { if self.inner.has_alpha() { ColorType::Rgba8 } else { ColorType::Rgb8 } } fn read_image(mut self, buf: &mut [u8]) -> ImageResult<()> { assert_eq!(u64::try_from(buf.len()), Ok(self.total_bytes())); self.inner .read_image(buf) .map_err(ImageError::from_webp_decode) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } fn icc_profile(&mut self) -> ImageResult>> { self.inner .icc_profile() .map_err(ImageError::from_webp_decode) } fn exif_metadata(&mut self) -> ImageResult>> { let exif = self .inner .exif_metadata() .map_err(ImageError::from_webp_decode)?; self.orientation = Some( exif.as_ref() .and_then(|exif| Orientation::from_exif_chunk(exif)) .unwrap_or(Orientation::NoTransforms), ); Ok(exif) } fn orientation(&mut self) -> ImageResult { // `exif_metadata` caches the orientation, so call it if `orientation` hasn't been set yet. if self.orientation.is_none() { let _ = self.exif_metadata()?; } Ok(self.orientation.unwrap()) } } impl<'a, R: 'a + BufRead + Seek> AnimationDecoder<'a> for WebPDecoder { fn into_frames(self) -> Frames<'a> { struct FramesInner { decoder: WebPDecoder, current: u32, } impl Iterator for FramesInner { type Item = ImageResult; fn next(&mut self) -> Option { if self.current == self.decoder.inner.num_frames() { return None; } self.current += 1; let (width, height) = self.decoder.inner.dimensions(); let (img, delay) = if self.decoder.inner.has_alpha() { let mut img = RgbaImage::new(width, height); match self.decoder.inner.read_frame(&mut img) { Ok(delay) => (img, delay), Err(image_webp::DecodingError::NoMoreFrames) => return None, Err(e) => return Some(Err(ImageError::from_webp_decode(e))), } } else { let mut img = RgbImage::new(width, height); match self.decoder.inner.read_frame(&mut img) { Ok(delay) => (img.convert(), delay), Err(image_webp::DecodingError::NoMoreFrames) => return None, Err(e) => return Some(Err(ImageError::from_webp_decode(e))), } }; Some(Ok(Frame::from_parts( img, 0, 0, Delay::from_numer_denom_ms(delay, 1), ))) } } Frames::new(Box::new(FramesInner { decoder: self, current: 0, })) } } impl ImageError { fn from_webp_decode(e: image_webp::DecodingError) -> Self { match e { image_webp::DecodingError::IoError(e) => ImageError::IoError(e), _ => ImageError::Decoding(DecodingError::new(ImageFormat::WebP.into(), e)), } } } #[cfg(test)] mod tests { use super::*; #[test] fn add_with_overflow_size() { let bytes = vec![ 0x52, 0x49, 0x46, 0x46, 0xaf, 0x37, 0x80, 0x47, 0x57, 0x45, 0x42, 0x50, 0x6c, 0x64, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xfb, 0x7e, 0x73, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x40, 0xfb, 0xff, 0xff, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x00, 0x00, 0x00, 0x00, 0x62, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x49, 0x49, 0x54, 0x55, 0x50, 0x4c, 0x54, 0x59, 0x50, 0x45, 0x33, 0x37, 0x44, 0x4d, 0x46, ]; let data = std::io::Cursor::new(bytes); let _ = WebPDecoder::new(data); } } image-0.25.5/src/codecs/webp/encoder.rs000064400000000000000000000101231046102023000157450ustar 00000000000000//! Encoding of WebP images. 
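//!
//! A minimal lossless-encoding sketch (the 1x1 `Rgb8` buffer and `Vec<u8>`
//! sink are illustrative):
//!
//! ```
//! use image::codecs::webp::WebPEncoder;
//! use image::ExtendedColorType;
//!
//! let mut out = Vec::new();
//! WebPEncoder::new_lossless(&mut out)
//!     .encode(&[0u8, 0, 0], 1, 1, ExtendedColorType::Rgb8)
//!     .unwrap();
//! ```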
use std::io::Write; use crate::error::{EncodingError, UnsupportedError, UnsupportedErrorKind}; use crate::{ExtendedColorType, ImageEncoder, ImageError, ImageFormat, ImageResult}; /// WebP Encoder. /// /// ### Limitations /// /// Right now only **lossless** encoding is supported. /// /// If you need **lossy** encoding, you'll have to use `libwebp`. /// Example code for encoding a [`DynamicImage`](crate::DynamicImage) with `libwebp` /// via the [`webp`](https://docs.rs/webp/latest/webp/) crate can be found /// [here](https://github.com/jaredforth/webp/blob/main/examples/convert.rs). /// /// ### Compression ratio /// /// This encoder reaches compression ratios higher than PNG at a fraction of the encoding time. /// However, it does not reach the full potential of lossless WebP for reducing file size. /// /// If you need an even higher compression ratio at the cost of much slower encoding, /// please encode the image with `libwebp` as outlined above. pub struct WebPEncoder { inner: image_webp::WebPEncoder, } impl WebPEncoder { /// Create a new encoder that writes its output to `w`. /// /// Uses "VP8L" lossless encoding. pub fn new_lossless(w: W) -> Self { Self { inner: image_webp::WebPEncoder::new(w), } } /// Encode image data with the indicated color type. /// /// The encoder requires image data be Rgb8 or Rgba8. /// /// # Panics /// /// Panics if `width * height * color.bytes_per_pixel() != data.len()`. #[track_caller] pub fn encode( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { let expected_buffer_len = color_type.buffer_size(width, height); assert_eq!( expected_buffer_len, buf.len() as u64, "Invalid buffer length: expected {expected_buffer_len} got {} for {width}x{height} image", buf.len(), ); let color_type = match color_type { ExtendedColorType::L8 => image_webp::ColorType::L8, ExtendedColorType::La8 => image_webp::ColorType::La8, ExtendedColorType::Rgb8 => image_webp::ColorType::Rgb8, ExtendedColorType::Rgba8 => image_webp::ColorType::Rgba8, _ => { return Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormat::WebP.into(), UnsupportedErrorKind::Color(color_type), ), )) } }; self.inner .encode(buf, width, height, color_type) .map_err(ImageError::from_webp_encode) } } impl ImageEncoder for WebPEncoder { #[track_caller] fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()> { self.encode(buf, width, height, color_type) } fn set_icc_profile(&mut self, icc_profile: Vec) -> Result<(), UnsupportedError> { self.inner.set_icc_profile(icc_profile); Ok(()) } } impl ImageError { fn from_webp_encode(e: image_webp::EncodingError) -> Self { match e { image_webp::EncodingError::IoError(e) => ImageError::IoError(e), _ => ImageError::Encoding(EncodingError::new(ImageFormat::WebP.into(), e)), } } } #[cfg(test)] mod tests { use crate::{ImageEncoder, RgbaImage}; #[test] fn write_webp() { let img = RgbaImage::from_raw(10, 6, (0..240).collect()).unwrap(); let mut output = Vec::new(); super::WebPEncoder::new_lossless(&mut output) .write_image( img.inner_pixels(), img.width(), img.height(), crate::ExtendedColorType::Rgba8, ) .unwrap(); let img2 = crate::load_from_memory_with_format(&output, crate::ImageFormat::WebP) .unwrap() .to_rgba8(); assert_eq!(img, img2); } } image-0.25.5/src/codecs/webp/mod.rs000064400000000000000000000002151046102023000151060ustar 00000000000000//! 
Decoding and Encoding of WebP Images mod decoder; mod encoder; pub use self::decoder::WebPDecoder; pub use self::encoder::WebPEncoder; image-0.25.5/src/color.rs000064400000000000000000000737111046102023000132630ustar 00000000000000use std::ops::{Index, IndexMut}; use num_traits::{NumCast, ToPrimitive, Zero}; use crate::traits::{Enlargeable, Pixel, Primitive}; /// An enumeration over supported color types and bit depths #[derive(Copy, PartialEq, Eq, Debug, Clone, Hash)] #[non_exhaustive] pub enum ColorType { /// Pixel is 8-bit luminance L8, /// Pixel is 8-bit luminance with an alpha channel La8, /// Pixel contains 8-bit R, G and B channels Rgb8, /// Pixel is 8-bit RGB with an alpha channel Rgba8, /// Pixel is 16-bit luminance L16, /// Pixel is 16-bit luminance with an alpha channel La16, /// Pixel is 16-bit RGB Rgb16, /// Pixel is 16-bit RGBA Rgba16, /// Pixel is 32-bit float RGB Rgb32F, /// Pixel is 32-bit float RGBA Rgba32F, } impl ColorType { /// Returns the number of bytes contained in a pixel of `ColorType` ```c``` #[must_use] pub fn bytes_per_pixel(self) -> u8 { match self { ColorType::L8 => 1, ColorType::L16 | ColorType::La8 => 2, ColorType::Rgb8 => 3, ColorType::Rgba8 | ColorType::La16 => 4, ColorType::Rgb16 => 6, ColorType::Rgba16 => 8, ColorType::Rgb32F => 3 * 4, ColorType::Rgba32F => 4 * 4, } } /// Returns if there is an alpha channel. #[must_use] pub fn has_alpha(self) -> bool { use ColorType::*; match self { L8 | L16 | Rgb8 | Rgb16 | Rgb32F => false, La8 | Rgba8 | La16 | Rgba16 | Rgba32F => true, } } /// Returns false if the color scheme is grayscale, true otherwise. #[must_use] pub fn has_color(self) -> bool { use ColorType::*; match self { L8 | L16 | La8 | La16 => false, Rgb8 | Rgb16 | Rgba8 | Rgba16 | Rgb32F | Rgba32F => true, } } /// Returns the number of bits contained in a pixel of `ColorType` ```c``` (which will always be /// a multiple of 8). #[must_use] pub fn bits_per_pixel(self) -> u16 { >::from(self.bytes_per_pixel()) * 8 } /// Returns the number of color channels that make up this pixel #[must_use] pub fn channel_count(self) -> u8 { let e: ExtendedColorType = self.into(); e.channel_count() } } /// An enumeration of color types encountered in image formats. /// /// This is not exhaustive over all existing image formats but should be granular enough to allow /// round tripping of decoding and encoding as much as possible. The variants will be extended as /// necessary to enable this. /// /// Another purpose is to advise users of a rough estimate of the accuracy and effort of the /// decoding from and encoding to such an image format. 
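///
/// A small sketch of the helpers available on this type (values follow the
/// variants defined below):
///
/// ```
/// use image::{ColorType, ExtendedColorType};
///
/// let color = ExtendedColorType::from(ColorType::Rgba8);
/// assert_eq!(color, ExtendedColorType::Rgba8);
/// assert_eq!(color.channel_count(), 4);
/// assert_eq!(color.bits_per_pixel(), 32);
/// ```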
#[derive(Copy, PartialEq, Eq, Debug, Clone, Hash)] #[non_exhaustive] pub enum ExtendedColorType { /// Pixel is 8-bit alpha A8, /// Pixel is 1-bit luminance L1, /// Pixel is 1-bit luminance with an alpha channel La1, /// Pixel contains 1-bit R, G and B channels Rgb1, /// Pixel is 1-bit RGB with an alpha channel Rgba1, /// Pixel is 2-bit luminance L2, /// Pixel is 2-bit luminance with an alpha channel La2, /// Pixel contains 2-bit R, G and B channels Rgb2, /// Pixel is 2-bit RGB with an alpha channel Rgba2, /// Pixel is 4-bit luminance L4, /// Pixel is 4-bit luminance with an alpha channel La4, /// Pixel contains 4-bit R, G and B channels Rgb4, /// Pixel is 4-bit RGB with an alpha channel Rgba4, /// Pixel is 8-bit luminance L8, /// Pixel is 8-bit luminance with an alpha channel La8, /// Pixel contains 8-bit R, G and B channels Rgb8, /// Pixel is 8-bit RGB with an alpha channel Rgba8, /// Pixel is 16-bit luminance L16, /// Pixel is 16-bit luminance with an alpha channel La16, /// Pixel contains 16-bit R, G and B channels Rgb16, /// Pixel is 16-bit RGB with an alpha channel Rgba16, /// Pixel contains 8-bit B, G and R channels Bgr8, /// Pixel is 8-bit BGR with an alpha channel Bgra8, // TODO f16 types? /// Pixel is 32-bit float RGB Rgb32F, /// Pixel is 32-bit float RGBA Rgba32F, /// Pixel is 8-bit CMYK Cmyk8, /// Pixel is of unknown color type with the specified bits per pixel. This can apply to pixels /// which are associated with an external palette. In that case, the pixel value is an index /// into the palette. Unknown(u8), } impl ExtendedColorType { /// Get the number of channels for colors of this type. /// /// Note that the `Unknown` variant returns a value of `1` since pixels can only be treated as /// an opaque datum by the library. #[must_use] pub fn channel_count(self) -> u8 { match self { ExtendedColorType::A8 | ExtendedColorType::L1 | ExtendedColorType::L2 | ExtendedColorType::L4 | ExtendedColorType::L8 | ExtendedColorType::L16 | ExtendedColorType::Unknown(_) => 1, ExtendedColorType::La1 | ExtendedColorType::La2 | ExtendedColorType::La4 | ExtendedColorType::La8 | ExtendedColorType::La16 => 2, ExtendedColorType::Rgb1 | ExtendedColorType::Rgb2 | ExtendedColorType::Rgb4 | ExtendedColorType::Rgb8 | ExtendedColorType::Rgb16 | ExtendedColorType::Rgb32F | ExtendedColorType::Bgr8 => 3, ExtendedColorType::Rgba1 | ExtendedColorType::Rgba2 | ExtendedColorType::Rgba4 | ExtendedColorType::Rgba8 | ExtendedColorType::Rgba16 | ExtendedColorType::Rgba32F | ExtendedColorType::Bgra8 | ExtendedColorType::Cmyk8 => 4, } } /// Returns the number of bits per pixel for this color type. 
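///
/// A couple of illustrative values, matching the match arms below:
///
/// ```
/// use image::ExtendedColorType;
///
/// assert_eq!(ExtendedColorType::Rgb8.bits_per_pixel(), 24);
/// assert_eq!(ExtendedColorType::L1.bits_per_pixel(), 1);
/// ```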
#[must_use] pub fn bits_per_pixel(&self) -> u16 { match *self { ExtendedColorType::A8 => 8, ExtendedColorType::L1 => 1, ExtendedColorType::La1 => 2, ExtendedColorType::Rgb1 => 3, ExtendedColorType::Rgba1 => 4, ExtendedColorType::L2 => 2, ExtendedColorType::La2 => 4, ExtendedColorType::Rgb2 => 6, ExtendedColorType::Rgba2 => 8, ExtendedColorType::L4 => 4, ExtendedColorType::La4 => 8, ExtendedColorType::Rgb4 => 12, ExtendedColorType::Rgba4 => 16, ExtendedColorType::L8 => 8, ExtendedColorType::La8 => 16, ExtendedColorType::Rgb8 => 24, ExtendedColorType::Rgba8 => 32, ExtendedColorType::L16 => 16, ExtendedColorType::La16 => 32, ExtendedColorType::Rgb16 => 48, ExtendedColorType::Rgba16 => 64, ExtendedColorType::Rgb32F => 96, ExtendedColorType::Rgba32F => 128, ExtendedColorType::Bgr8 => 24, ExtendedColorType::Bgra8 => 32, ExtendedColorType::Cmyk8 => 32, ExtendedColorType::Unknown(bpp) => bpp as u16, } } /// Returns the number of bytes required to hold a width x height image of this color type. pub(crate) fn buffer_size(self, width: u32, height: u32) -> u64 { let bpp = self.bits_per_pixel() as u64; let row_pitch = (width as u64 * bpp + 7) / 8; row_pitch.saturating_mul(height as u64) } } impl From for ExtendedColorType { fn from(c: ColorType) -> Self { match c { ColorType::L8 => ExtendedColorType::L8, ColorType::La8 => ExtendedColorType::La8, ColorType::Rgb8 => ExtendedColorType::Rgb8, ColorType::Rgba8 => ExtendedColorType::Rgba8, ColorType::L16 => ExtendedColorType::L16, ColorType::La16 => ExtendedColorType::La16, ColorType::Rgb16 => ExtendedColorType::Rgb16, ColorType::Rgba16 => ExtendedColorType::Rgba16, ColorType::Rgb32F => ExtendedColorType::Rgb32F, ColorType::Rgba32F => ExtendedColorType::Rgba32F, } } } macro_rules! define_colors { {$( $(#[$doc:meta])* pub struct $ident:ident([T; $channels:expr, $alphas:expr]) = $interpretation:literal; )*} => { $( // START Structure definitions $(#[$doc])* #[derive(PartialEq, Eq, Clone, Debug, Copy, Hash)] #[repr(transparent)] #[allow(missing_docs)] pub struct $ident (pub [T; $channels]); impl Pixel for $ident { type Subpixel = T; const CHANNEL_COUNT: u8 = $channels; #[inline(always)] fn channels(&self) -> &[T] { &self.0 } #[inline(always)] fn channels_mut(&mut self) -> &mut [T] { &mut self.0 } const COLOR_MODEL: &'static str = $interpretation; fn channels4(&self) -> (T, T, T, T) { const CHANNELS: usize = $channels; let mut channels = [T::DEFAULT_MAX_VALUE; 4]; channels[0..CHANNELS].copy_from_slice(&self.0); (channels[0], channels[1], channels[2], channels[3]) } fn from_channels(a: T, b: T, c: T, d: T,) -> $ident { const CHANNELS: usize = $channels; *<$ident as Pixel>::from_slice(&[a, b, c, d][..CHANNELS]) } fn from_slice(slice: &[T]) -> &$ident { assert_eq!(slice.len(), $channels); unsafe { &*(slice.as_ptr() as *const $ident) } } fn from_slice_mut(slice: &mut [T]) -> &mut $ident { assert_eq!(slice.len(), $channels); unsafe { &mut *(slice.as_mut_ptr() as *mut $ident) } } fn to_rgb(&self) -> Rgb { let mut pix = Rgb([Zero::zero(), Zero::zero(), Zero::zero()]); pix.from_color(self); pix } fn to_rgba(&self) -> Rgba { let mut pix = Rgba([Zero::zero(), Zero::zero(), Zero::zero(), Zero::zero()]); pix.from_color(self); pix } fn to_luma(&self) -> Luma { let mut pix = Luma([Zero::zero()]); pix.from_color(self); pix } fn to_luma_alpha(&self) -> LumaA { let mut pix = LumaA([Zero::zero(), Zero::zero()]); pix.from_color(self); pix } fn map(& self, f: F) -> $ident where F: FnMut(T) -> T { let mut this = (*self).clone(); this.apply(f); this } fn apply(&mut self, mut f: F) 
where F: FnMut(T) -> T { for v in &mut self.0 { *v = f(*v) } } fn map_with_alpha(&self, f: F, g: G) -> $ident where F: FnMut(T) -> T, G: FnMut(T) -> T { let mut this = (*self).clone(); this.apply_with_alpha(f, g); this } fn apply_with_alpha(&mut self, mut f: F, mut g: G) where F: FnMut(T) -> T, G: FnMut(T) -> T { const ALPHA: usize = $channels - $alphas; for v in self.0[..ALPHA].iter_mut() { *v = f(*v) } // The branch of this match is `const`. This way ensures that no subexpression fails the // `const_err` lint (the expression `self.0[ALPHA]` would). if let Some(v) = self.0.get_mut(ALPHA) { *v = g(*v) } } fn map2(&self, other: &Self, f: F) -> $ident where F: FnMut(T, T) -> T { let mut this = (*self).clone(); this.apply2(other, f); this } fn apply2(&mut self, other: &$ident, mut f: F) where F: FnMut(T, T) -> T { for (a, &b) in self.0.iter_mut().zip(other.0.iter()) { *a = f(*a, b) } } fn invert(&mut self) { Invert::invert(self) } fn blend(&mut self, other: &$ident) { Blend::blend(self, other) } } impl Index for $ident { type Output = T; #[inline(always)] fn index(&self, _index: usize) -> &T { &self.0[_index] } } impl IndexMut for $ident { #[inline(always)] fn index_mut(&mut self, _index: usize) -> &mut T { &mut self.0[_index] } } impl From<[T; $channels]> for $ident { fn from(c: [T; $channels]) -> Self { Self(c) } } )* // END Structure definitions } } define_colors! { /// RGB colors. /// /// For the purpose of color conversion, as well as blending, the implementation of `Pixel` /// assumes an `sRGB` color space of its data. pub struct Rgb([T; 3, 0]) = "RGB"; /// Grayscale colors. pub struct Luma([T; 1, 0]) = "Y"; /// RGB colors + alpha channel pub struct Rgba([T; 4, 1]) = "RGBA"; /// Grayscale colors + alpha channel pub struct LumaA([T; 2, 1]) = "YA"; } /// Convert from one pixel component type to another. For example, convert from `u8` to `f32` pixel values. pub trait FromPrimitive { /// Converts from any pixel component type to this type. fn from_primitive(component: Component) -> Self; } impl FromPrimitive for T { fn from_primitive(sample: T) -> Self { sample } } // from f32: // Note that in to-integer-conversion we are performing rounding but NumCast::from is implemented // as truncate towards zero. We emulate rounding by adding a bias. impl FromPrimitive for u8 { fn from_primitive(float: f32) -> Self { let inner = (float.clamp(0.0, 1.0) * u8::MAX as f32).round(); NumCast::from(inner).unwrap() } } impl FromPrimitive for u16 { fn from_primitive(float: f32) -> Self { let inner = (float.clamp(0.0, 1.0) * u16::MAX as f32).round(); NumCast::from(inner).unwrap() } } // from u16: impl FromPrimitive for u8 { fn from_primitive(c16: u16) -> Self { fn from(c: impl Into) -> u32 { c.into() } // The input c is the numerator of `c / u16::MAX`. // Derive numerator of `num / u8::MAX`, with rounding. // // This method is based on the inverse (see FromPrimitive for u16) and was tested // exhaustively in Python. It's the same as the reference function: // round(c * (2**8 - 1) / (2**16 - 1)) NumCast::from((from(c16) + 128) / 257).unwrap() } } impl FromPrimitive for f32 { fn from_primitive(int: u16) -> Self { (int as f32 / u16::MAX as f32).clamp(0.0, 1.0) } } // from u8: impl FromPrimitive for f32 { fn from_primitive(int: u8) -> Self { (int as f32 / u8::MAX as f32).clamp(0.0, 1.0) } } impl FromPrimitive for u16 { fn from_primitive(c8: u8) -> Self { let x = c8.to_u64().unwrap(); NumCast::from((x << 8) | x).unwrap() } } /// Provides color conversions for the different pixel types. 
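The comment above states that the integer expression `(c + 128) / 257` reproduces `round(c * 255 / 65535)` for the 16-bit to 8-bit conversion. A standalone sketch (not part of the crate) that spot-checks the equivalence:

```rust
/// Integer formula used above: bias the numerator by 128, then divide by 257.
fn u16_to_u8_fast(c: u16) -> u8 {
    ((u32::from(c) + 128) / 257) as u8
}

/// Reference formula: rescale to the 8-bit range with rounding.
fn u16_to_u8_reference(c: u16) -> u8 {
    (f64::from(c) * 255.0 / 65535.0).round() as u8
}

fn main() {
    for c in [0u16, 127, 128, 129, 32767, 32768, 65535] {
        assert_eq!(u16_to_u8_fast(c), u16_to_u8_reference(c));
    }
}
```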
pub trait FromColor { /// Changes `self` to represent `Other` in the color space of `Self` #[allow(clippy::wrong_self_convention)] fn from_color(&mut self, _: &Other); } /// Copy-based conversions to target pixel types using `FromColor`. // FIXME: this trait should be removed and replaced with real color space models // rather than assuming sRGB. pub(crate) trait IntoColor { /// Constructs a pixel of the target type and converts this pixel into it. #[allow(clippy::wrong_self_convention)] fn into_color(&self) -> Other; } impl IntoColor for S where O: Pixel + FromColor, { #[allow(clippy::wrong_self_convention)] fn into_color(&self) -> O { // Note we cannot use Pixel::CHANNELS_COUNT here to directly construct // the pixel due to a current bug/limitation of consts. #[allow(deprecated)] let mut pix = O::from_channels(Zero::zero(), Zero::zero(), Zero::zero(), Zero::zero()); pix.from_color(self); pix } } /// Coefficients to transform from sRGB to a CIE Y (luminance) value. const SRGB_LUMA: [u32; 3] = [2126, 7152, 722]; const SRGB_LUMA_DIV: u32 = 10000; #[inline] fn rgb_to_luma(rgb: &[T]) -> T { let l = ::from(SRGB_LUMA[0]).unwrap() * rgb[0].to_larger() + ::from(SRGB_LUMA[1]).unwrap() * rgb[1].to_larger() + ::from(SRGB_LUMA[2]).unwrap() * rgb[2].to_larger(); T::clamp_from(l / ::from(SRGB_LUMA_DIV).unwrap()) } // `FromColor` for Luma impl FromColor> for Luma where T: FromPrimitive, { fn from_color(&mut self, other: &Luma) { let own = self.channels_mut(); let other = other.channels(); own[0] = T::from_primitive(other[0]); } } impl FromColor> for Luma where T: FromPrimitive, { fn from_color(&mut self, other: &LumaA) { self.channels_mut()[0] = T::from_primitive(other.channels()[0]); } } impl FromColor> for Luma where T: FromPrimitive, { fn from_color(&mut self, other: &Rgb) { let gray = self.channels_mut(); let rgb = other.channels(); gray[0] = T::from_primitive(rgb_to_luma(rgb)); } } impl FromColor> for Luma where T: FromPrimitive, { fn from_color(&mut self, other: &Rgba) { let gray = self.channels_mut(); let rgb = other.channels(); let l = rgb_to_luma(rgb); gray[0] = T::from_primitive(l); } } // `FromColor` for LumaA impl FromColor> for LumaA where T: FromPrimitive, { fn from_color(&mut self, other: &LumaA) { let own = self.channels_mut(); let other = other.channels(); own[0] = T::from_primitive(other[0]); own[1] = T::from_primitive(other[1]); } } impl FromColor> for LumaA where T: FromPrimitive, { fn from_color(&mut self, other: &Rgb) { let gray_a = self.channels_mut(); let rgb = other.channels(); gray_a[0] = T::from_primitive(rgb_to_luma(rgb)); gray_a[1] = T::DEFAULT_MAX_VALUE; } } impl FromColor> for LumaA where T: FromPrimitive, { fn from_color(&mut self, other: &Rgba) { let gray_a = self.channels_mut(); let rgba = other.channels(); gray_a[0] = T::from_primitive(rgb_to_luma(rgba)); gray_a[1] = T::from_primitive(rgba[3]); } } impl FromColor> for LumaA where T: FromPrimitive, { fn from_color(&mut self, other: &Luma) { let gray_a = self.channels_mut(); gray_a[0] = T::from_primitive(other.channels()[0]); gray_a[1] = T::DEFAULT_MAX_VALUE; } } // `FromColor` for RGBA impl FromColor> for Rgba where T: FromPrimitive, { fn from_color(&mut self, other: &Rgba) { let own = &mut self.0; let other = &other.0; own[0] = T::from_primitive(other[0]); own[1] = T::from_primitive(other[1]); own[2] = T::from_primitive(other[2]); own[3] = T::from_primitive(other[3]); } } impl FromColor> for Rgba where T: FromPrimitive, { fn from_color(&mut self, other: &Rgb) { let rgba = &mut self.0; let rgb = &other.0; rgba[0] = 
T::from_primitive(rgb[0]); rgba[1] = T::from_primitive(rgb[1]); rgba[2] = T::from_primitive(rgb[2]); rgba[3] = T::DEFAULT_MAX_VALUE; } } impl FromColor> for Rgba where T: FromPrimitive, { fn from_color(&mut self, gray: &LumaA) { let rgba = &mut self.0; let gray = &gray.0; rgba[0] = T::from_primitive(gray[0]); rgba[1] = T::from_primitive(gray[0]); rgba[2] = T::from_primitive(gray[0]); rgba[3] = T::from_primitive(gray[1]); } } impl FromColor> for Rgba where T: FromPrimitive, { fn from_color(&mut self, gray: &Luma) { let rgba = &mut self.0; let gray = gray.0[0]; rgba[0] = T::from_primitive(gray); rgba[1] = T::from_primitive(gray); rgba[2] = T::from_primitive(gray); rgba[3] = T::DEFAULT_MAX_VALUE; } } // `FromColor` for RGB impl FromColor> for Rgb where T: FromPrimitive, { fn from_color(&mut self, other: &Rgb) { let own = &mut self.0; let other = &other.0; own[0] = T::from_primitive(other[0]); own[1] = T::from_primitive(other[1]); own[2] = T::from_primitive(other[2]); } } impl FromColor> for Rgb where T: FromPrimitive, { fn from_color(&mut self, other: &Rgba) { let rgb = &mut self.0; let rgba = &other.0; rgb[0] = T::from_primitive(rgba[0]); rgb[1] = T::from_primitive(rgba[1]); rgb[2] = T::from_primitive(rgba[2]); } } impl FromColor> for Rgb where T: FromPrimitive, { fn from_color(&mut self, other: &LumaA) { let rgb = &mut self.0; let gray = other.0[0]; rgb[0] = T::from_primitive(gray); rgb[1] = T::from_primitive(gray); rgb[2] = T::from_primitive(gray); } } impl FromColor> for Rgb where T: FromPrimitive, { fn from_color(&mut self, other: &Luma) { let rgb = &mut self.0; let gray = other.0[0]; rgb[0] = T::from_primitive(gray); rgb[1] = T::from_primitive(gray); rgb[2] = T::from_primitive(gray); } } /// Blends a color inter another one pub(crate) trait Blend { /// Blends a color in-place. 
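The `SRGB_LUMA` coefficients used above are the BT.709 luma weights (0.2126, 0.7152, 0.0722) scaled by 10000, so a pure red 8-bit pixel converts to `255 * 2126 / 10000 = 54` under the truncating division. A short sketch via the public `Pixel::to_luma`:

```rust
use image::{Luma, Pixel, Rgb};

fn main() {
    // Pure red: luma = (2126 * 255 + 7152 * 0 + 722 * 0) / 10000 = 54.
    let Luma([y]) = Rgb([255u8, 0, 0]).to_luma();
    assert_eq!(y, 54);
}
```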
fn blend(&mut self, other: &Self); } impl Blend for LumaA { fn blend(&mut self, other: &LumaA) { let max_t = T::DEFAULT_MAX_VALUE; let max_t = max_t.to_f32().unwrap(); let (bg_luma, bg_a) = (self.0[0], self.0[1]); let (fg_luma, fg_a) = (other.0[0], other.0[1]); let (bg_luma, bg_a) = ( bg_luma.to_f32().unwrap() / max_t, bg_a.to_f32().unwrap() / max_t, ); let (fg_luma, fg_a) = ( fg_luma.to_f32().unwrap() / max_t, fg_a.to_f32().unwrap() / max_t, ); let alpha_final = bg_a + fg_a - bg_a * fg_a; if alpha_final == 0.0 { return; }; let bg_luma_a = bg_luma * bg_a; let fg_luma_a = fg_luma * fg_a; let out_luma_a = fg_luma_a + bg_luma_a * (1.0 - fg_a); let out_luma = out_luma_a / alpha_final; *self = LumaA([ NumCast::from(max_t * out_luma).unwrap(), NumCast::from(max_t * alpha_final).unwrap(), ]); } } impl Blend for Luma { fn blend(&mut self, other: &Luma) { *self = *other; } } impl Blend for Rgba { fn blend(&mut self, other: &Rgba) { // http://stackoverflow.com/questions/7438263/alpha-compositing-algorithm-blend-modes#answer-11163848 if other.0[3].is_zero() { return; } if other.0[3] == T::DEFAULT_MAX_VALUE { *self = *other; return; } // First, as we don't know what type our pixel is, we have to convert to floats between 0.0 and 1.0 let max_t = T::DEFAULT_MAX_VALUE; let max_t = max_t.to_f32().unwrap(); let (bg_r, bg_g, bg_b, bg_a) = (self.0[0], self.0[1], self.0[2], self.0[3]); let (fg_r, fg_g, fg_b, fg_a) = (other.0[0], other.0[1], other.0[2], other.0[3]); let (bg_r, bg_g, bg_b, bg_a) = ( bg_r.to_f32().unwrap() / max_t, bg_g.to_f32().unwrap() / max_t, bg_b.to_f32().unwrap() / max_t, bg_a.to_f32().unwrap() / max_t, ); let (fg_r, fg_g, fg_b, fg_a) = ( fg_r.to_f32().unwrap() / max_t, fg_g.to_f32().unwrap() / max_t, fg_b.to_f32().unwrap() / max_t, fg_a.to_f32().unwrap() / max_t, ); // Work out what the final alpha level will be let alpha_final = bg_a + fg_a - bg_a * fg_a; if alpha_final == 0.0 { return; }; // We premultiply our channels by their alpha, as this makes it easier to calculate let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a); let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * fg_a, fg_g * fg_a, fg_b * fg_a); // Standard formula for src-over alpha compositing let (out_r_a, out_g_a, out_b_a) = ( fg_r_a + bg_r_a * (1.0 - fg_a), fg_g_a + bg_g_a * (1.0 - fg_a), fg_b_a + bg_b_a * (1.0 - fg_a), ); // Unmultiply the channels by our resultant alpha channel let (out_r, out_g, out_b) = ( out_r_a / alpha_final, out_g_a / alpha_final, out_b_a / alpha_final, ); // Cast back to our initial type on return *self = Rgba([ NumCast::from(max_t * out_r).unwrap(), NumCast::from(max_t * out_g).unwrap(), NumCast::from(max_t * out_b).unwrap(), NumCast::from(max_t * alpha_final).unwrap(), ]); } } impl Blend for Rgb { fn blend(&mut self, other: &Rgb) { *self = *other; } } /// Invert a color pub(crate) trait Invert { /// Inverts a color in-place. 
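A usage sketch of the src-over compositing implemented above, reached through the public `Pixel::blend`; the exact color channels depend on the rounding described, so only the alpha result is asserted here:

```rust
use image::{Pixel, Rgba};

fn main() {
    // Opaque blue background, half-transparent red foreground.
    let mut bg = Rgba([0u8, 0, 255, 255]);
    let fg = Rgba([255u8, 0, 0, 128]);
    bg.blend(&fg);
    // Compositing over an opaque background stays opaque; the color
    // channels end up roughly halfway between red and blue.
    assert_eq!(bg.0[3], 255);
}
```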
fn invert(&mut self); } impl Invert for LumaA { fn invert(&mut self) { let l = self.0; let max = T::DEFAULT_MAX_VALUE; *self = LumaA([max - l[0], l[1]]); } } impl Invert for Luma { fn invert(&mut self) { let l = self.0; let max = T::DEFAULT_MAX_VALUE; let l1 = max - l[0]; *self = Luma([l1]); } } impl Invert for Rgba { fn invert(&mut self) { let rgba = self.0; let max = T::DEFAULT_MAX_VALUE; *self = Rgba([max - rgba[0], max - rgba[1], max - rgba[2], rgba[3]]); } } impl Invert for Rgb { fn invert(&mut self) { let rgb = self.0; let max = T::DEFAULT_MAX_VALUE; let r1 = max - rgb[0]; let g1 = max - rgb[1]; let b1 = max - rgb[2]; *self = Rgb([r1, g1, b1]); } } #[cfg(test)] mod tests { use super::{Luma, LumaA, Pixel, Rgb, Rgba}; #[test] fn test_apply_with_alpha_rgba() { let mut rgba = Rgba([0, 0, 0, 0]); rgba.apply_with_alpha(|s| s, |_| 0xFF); assert_eq!(rgba, Rgba([0, 0, 0, 0xFF])); } #[test] fn test_apply_with_alpha_rgb() { let mut rgb = Rgb([0, 0, 0]); rgb.apply_with_alpha(|s| s, |_| panic!("bug")); assert_eq!(rgb, Rgb([0, 0, 0])); } #[test] fn test_map_with_alpha_rgba() { let rgba = Rgba([0, 0, 0, 0]).map_with_alpha(|s| s, |_| 0xFF); assert_eq!(rgba, Rgba([0, 0, 0, 0xFF])); } #[test] fn test_map_with_alpha_rgb() { let rgb = Rgb([0, 0, 0]).map_with_alpha(|s| s, |_| panic!("bug")); assert_eq!(rgb, Rgb([0, 0, 0])); } #[test] fn test_blend_luma_alpha() { let a = &mut LumaA([255_u8, 255]); let b = LumaA([255_u8, 255]); a.blend(&b); assert_eq!(a.0[0], 255); assert_eq!(a.0[1], 255); let a = &mut LumaA([255_u8, 0]); let b = LumaA([255_u8, 255]); a.blend(&b); assert_eq!(a.0[0], 255); assert_eq!(a.0[1], 255); let a = &mut LumaA([255_u8, 255]); let b = LumaA([255_u8, 0]); a.blend(&b); assert_eq!(a.0[0], 255); assert_eq!(a.0[1], 255); let a = &mut LumaA([255_u8, 0]); let b = LumaA([255_u8, 0]); a.blend(&b); assert_eq!(a.0[0], 255); assert_eq!(a.0[1], 0); } #[test] fn test_blend_rgba() { let a = &mut Rgba([255_u8, 255, 255, 255]); let b = Rgba([255_u8, 255, 255, 255]); a.blend(&b); assert_eq!(a.0, [255, 255, 255, 255]); let a = &mut Rgba([255_u8, 255, 255, 0]); let b = Rgba([255_u8, 255, 255, 255]); a.blend(&b); assert_eq!(a.0, [255, 255, 255, 255]); let a = &mut Rgba([255_u8, 255, 255, 255]); let b = Rgba([255_u8, 255, 255, 0]); a.blend(&b); assert_eq!(a.0, [255, 255, 255, 255]); let a = &mut Rgba([255_u8, 255, 255, 0]); let b = Rgba([255_u8, 255, 255, 0]); a.blend(&b); assert_eq!(a.0, [255, 255, 255, 0]); } #[test] fn test_apply_without_alpha_rgba() { let mut rgba = Rgba([0, 0, 0, 0]); rgba.apply_without_alpha(|s| s + 1); assert_eq!(rgba, Rgba([1, 1, 1, 0])); } #[test] fn test_apply_without_alpha_rgb() { let mut rgb = Rgb([0, 0, 0]); rgb.apply_without_alpha(|s| s + 1); assert_eq!(rgb, Rgb([1, 1, 1])); } #[test] fn test_map_without_alpha_rgba() { let rgba = Rgba([0, 0, 0, 0]).map_without_alpha(|s| s + 1); assert_eq!(rgba, Rgba([1, 1, 1, 0])); } #[test] fn test_map_without_alpha_rgb() { let rgb = Rgb([0, 0, 0]).map_without_alpha(|s| s + 1); assert_eq!(rgb, Rgb([1, 1, 1])); } macro_rules! 
test_lossless_conversion { ($a:ty, $b:ty, $c:ty) => { let a: $a = [<$a as Pixel>::Subpixel::DEFAULT_MAX_VALUE >> 2; <$a as Pixel>::CHANNEL_COUNT as usize] .into(); let b: $b = a.into_color(); let c: $c = b.into_color(); assert_eq!(a.channels(), c.channels()); }; } #[test] fn test_lossless_conversions() { use super::IntoColor; use crate::traits::Primitive; test_lossless_conversion!(Luma, Luma, Luma); test_lossless_conversion!(LumaA, LumaA, LumaA); test_lossless_conversion!(Rgb, Rgb, Rgb); test_lossless_conversion!(Rgba, Rgba, Rgba); } #[test] fn accuracy_conversion() { use super::{Luma, Pixel, Rgb}; let pixel = Rgb::from([13, 13, 13]); let Luma([luma]) = pixel.to_luma(); assert_eq!(luma, 13); } } image-0.25.5/src/dynimage.rs000064400000000000000000001464251046102023000137450ustar 00000000000000use std::io::{self, Seek, Write}; use std::path::Path; #[cfg(feature = "gif")] use crate::codecs::gif; #[cfg(feature = "png")] use crate::codecs::png; use crate::buffer_::{ ConvertBuffer, Gray16Image, GrayAlpha16Image, GrayAlphaImage, GrayImage, ImageBuffer, Rgb16Image, RgbImage, Rgba16Image, RgbaImage, }; use crate::color::{self, IntoColor}; use crate::error::{ImageError, ImageResult, ParameterError, ParameterErrorKind}; use crate::flat::FlatSamples; use crate::image::{GenericImage, GenericImageView, ImageDecoder, ImageEncoder, ImageFormat}; use crate::image_reader::free_functions; use crate::math::resize_dimensions; use crate::metadata::Orientation; use crate::traits::Pixel; use crate::ImageReader; use crate::{image, Luma, LumaA}; use crate::{imageops, ExtendedColorType}; use crate::{Rgb32FImage, Rgba32FImage}; /// A Dynamic Image /// /// This represents a _matrix_ of _pixels_ which are _convertible_ from and to an _RGBA_ /// representation. More variants that adhere to these principles may get added in the future, in /// particular to cover other combinations typically used. /// /// # Usage /// /// This type can act as a converter between specific `ImageBuffer` instances. /// /// ``` /// use image::{DynamicImage, GrayImage, RgbImage}; /// /// let rgb: RgbImage = RgbImage::new(10, 10); /// let luma: GrayImage = DynamicImage::ImageRgb8(rgb).into_luma8(); /// ``` /// /// # Design /// /// There is no goal to provide an all-encompassing type with all possible memory layouts. This /// would hardly be feasible as a simple enum, due to the sheer number of combinations of channel /// kinds, channel order, and bit depth. Rather, this type provides an opinionated selection with /// normalized channel order which can store common pixel values without loss. #[derive(Debug, PartialEq)] #[non_exhaustive] pub enum DynamicImage { /// Each pixel in this image is 8-bit Luma ImageLuma8(GrayImage), /// Each pixel in this image is 8-bit Luma with alpha ImageLumaA8(GrayAlphaImage), /// Each pixel in this image is 8-bit Rgb ImageRgb8(RgbImage), /// Each pixel in this image is 8-bit Rgb with alpha ImageRgba8(RgbaImage), /// Each pixel in this image is 16-bit Luma ImageLuma16(Gray16Image), /// Each pixel in this image is 16-bit Luma with alpha ImageLumaA16(GrayAlpha16Image), /// Each pixel in this image is 16-bit Rgb ImageRgb16(Rgb16Image), /// Each pixel in this image is 16-bit Rgb with alpha ImageRgba16(Rgba16Image), /// Each pixel in this image is 32-bit float Rgb ImageRgb32F(Rgb32FImage), /// Each pixel in this image is 32-bit float Rgb with alpha ImageRgba32F(Rgba32FImage), } macro_rules! 
dynamic_map( ($dynimage: expr, $image: pat => $action: expr) => ({ use DynamicImage::*; match $dynimage { ImageLuma8($image) => ImageLuma8($action), ImageLumaA8($image) => ImageLumaA8($action), ImageRgb8($image) => ImageRgb8($action), ImageRgba8($image) => ImageRgba8($action), ImageLuma16($image) => ImageLuma16($action), ImageLumaA16($image) => ImageLumaA16($action), ImageRgb16($image) => ImageRgb16($action), ImageRgba16($image) => ImageRgba16($action), ImageRgb32F($image) => ImageRgb32F($action), ImageRgba32F($image) => ImageRgba32F($action), } }); ($dynimage: expr, $image:pat_param, $action: expr) => ( match $dynimage { DynamicImage::ImageLuma8($image) => $action, DynamicImage::ImageLumaA8($image) => $action, DynamicImage::ImageRgb8($image) => $action, DynamicImage::ImageRgba8($image) => $action, DynamicImage::ImageLuma16($image) => $action, DynamicImage::ImageLumaA16($image) => $action, DynamicImage::ImageRgb16($image) => $action, DynamicImage::ImageRgba16($image) => $action, DynamicImage::ImageRgb32F($image) => $action, DynamicImage::ImageRgba32F($image) => $action, } ); ); impl Clone for DynamicImage { fn clone(&self) -> Self { dynamic_map!(*self, ref p, DynamicImage::from(p.clone())) } fn clone_from(&mut self, source: &Self) { match (self, source) { (Self::ImageLuma8(p1), Self::ImageLuma8(p2)) => p1.clone_from(p2), (Self::ImageLumaA8(p1), Self::ImageLumaA8(p2)) => p1.clone_from(p2), (Self::ImageRgb8(p1), Self::ImageRgb8(p2)) => p1.clone_from(p2), (Self::ImageRgba8(p1), Self::ImageRgba8(p2)) => p1.clone_from(p2), (Self::ImageLuma16(p1), Self::ImageLuma16(p2)) => p1.clone_from(p2), (Self::ImageLumaA16(p1), Self::ImageLumaA16(p2)) => p1.clone_from(p2), (Self::ImageRgb16(p1), Self::ImageRgb16(p2)) => p1.clone_from(p2), (Self::ImageRgba16(p1), Self::ImageRgba16(p2)) => p1.clone_from(p2), (Self::ImageRgb32F(p1), Self::ImageRgb32F(p2)) => p1.clone_from(p2), (Self::ImageRgba32F(p1), Self::ImageRgba32F(p2)) => p1.clone_from(p2), (this, source) => *this = source.clone(), } } } impl DynamicImage { /// Creates a dynamic image backed by a buffer depending on /// the color type given. #[must_use] pub fn new(w: u32, h: u32, color: color::ColorType) -> DynamicImage { use color::ColorType::*; match color { L8 => Self::new_luma8(w, h), La8 => Self::new_luma_a8(w, h), Rgb8 => Self::new_rgb8(w, h), Rgba8 => Self::new_rgba8(w, h), L16 => Self::new_luma16(w, h), La16 => Self::new_luma_a16(w, h), Rgb16 => Self::new_rgb16(w, h), Rgba16 => Self::new_rgba16(w, h), Rgb32F => Self::new_rgb32f(w, h), Rgba32F => Self::new_rgba32f(w, h), } } /// Creates a dynamic image backed by a buffer of gray pixels. #[must_use] pub fn new_luma8(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageLuma8(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of gray /// pixels with transparency. #[must_use] pub fn new_luma_a8(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageLumaA8(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGB pixels. #[must_use] pub fn new_rgb8(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgb8(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGBA pixels. #[must_use] pub fn new_rgba8(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgba8(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of gray pixels. 
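A quick sketch of the constructor above: `DynamicImage::new` selects the backing `ImageBuffer` variant from the requested `ColorType`:

```rust
use image::{ColorType, DynamicImage};

fn main() {
    let img = DynamicImage::new(16, 16, ColorType::Rgba8);
    // The chosen variant is reflected back by `color()`.
    assert_eq!(img.color(), ColorType::Rgba8);
    assert_eq!((img.width(), img.height()), (16, 16));
}
```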
#[must_use] pub fn new_luma16(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageLuma16(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of gray /// pixels with transparency. #[must_use] pub fn new_luma_a16(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageLumaA16(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGB pixels. #[must_use] pub fn new_rgb16(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgb16(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGBA pixels. #[must_use] pub fn new_rgba16(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgba16(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGB pixels. #[must_use] pub fn new_rgb32f(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgb32F(ImageBuffer::new(w, h)) } /// Creates a dynamic image backed by a buffer of RGBA pixels. #[must_use] pub fn new_rgba32f(w: u32, h: u32) -> DynamicImage { DynamicImage::ImageRgba32F(ImageBuffer::new(w, h)) } /// Decodes an encoded image into a dynamic image. pub fn from_decoder(decoder: impl ImageDecoder) -> ImageResult { decoder_to_image(decoder) } /// Returns a copy of this image as an RGB image. #[must_use] pub fn to_rgb8(&self) -> RgbImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as an RGB image. #[must_use] pub fn to_rgb16(&self) -> Rgb16Image { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as an RGB image. #[must_use] pub fn to_rgb32f(&self) -> Rgb32FImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as an RGBA image. #[must_use] pub fn to_rgba8(&self) -> RgbaImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as an RGBA image. #[must_use] pub fn to_rgba16(&self) -> Rgba16Image { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as an RGBA image. #[must_use] pub fn to_rgba32f(&self) -> Rgba32FImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a Luma image. #[must_use] pub fn to_luma8(&self) -> GrayImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a Luma image. #[must_use] pub fn to_luma16(&self) -> Gray16Image { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a Luma image. #[must_use] pub fn to_luma32f(&self) -> ImageBuffer, Vec> { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a `LumaA` image. #[must_use] pub fn to_luma_alpha8(&self) -> GrayAlphaImage { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a `LumaA` image. #[must_use] pub fn to_luma_alpha16(&self) -> GrayAlpha16Image { dynamic_map!(*self, ref p, p.convert()) } /// Returns a copy of this image as a `LumaA` image. #[must_use] pub fn to_luma_alpha32f(&self) -> ImageBuffer, Vec> { dynamic_map!(*self, ref p, p.convert()) } /// Consume the image and returns a RGB image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_rgb8(self) -> RgbImage { match self { DynamicImage::ImageRgb8(x) => x, x => x.to_rgb8(), } } /// Consume the image and returns a RGB image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. 
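The difference between the `to_*` and `into_*` conversion families above, in a minimal sketch: `to_*` always produces a converted copy, while `into_*` hands back the existing buffer when the variant already matches:

```rust
use image::DynamicImage;

fn main() {
    let img = DynamicImage::new_rgb8(4, 4);
    // Always a copy, converted to 8-bit grayscale.
    let gray = img.to_luma8();
    assert_eq!(gray.dimensions(), (4, 4));
    // No conversion needed: the RGB8 buffer is moved out as-is.
    let rgb = img.into_rgb8();
    assert_eq!(rgb.dimensions(), (4, 4));
}
```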
#[must_use] pub fn into_rgb16(self) -> Rgb16Image { match self { DynamicImage::ImageRgb16(x) => x, x => x.to_rgb16(), } } /// Consume the image and returns a RGB image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_rgb32f(self) -> Rgb32FImage { match self { DynamicImage::ImageRgb32F(x) => x, x => x.to_rgb32f(), } } /// Consume the image and returns a RGBA image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_rgba8(self) -> RgbaImage { match self { DynamicImage::ImageRgba8(x) => x, x => x.to_rgba8(), } } /// Consume the image and returns a RGBA image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_rgba16(self) -> Rgba16Image { match self { DynamicImage::ImageRgba16(x) => x, x => x.to_rgba16(), } } /// Consume the image and returns a RGBA image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_rgba32f(self) -> Rgba32FImage { match self { DynamicImage::ImageRgba32F(x) => x, x => x.to_rgba32f(), } } /// Consume the image and returns a Luma image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_luma8(self) -> GrayImage { match self { DynamicImage::ImageLuma8(x) => x, x => x.to_luma8(), } } /// Consume the image and returns a Luma image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_luma16(self) -> Gray16Image { match self { DynamicImage::ImageLuma16(x) => x, x => x.to_luma16(), } } /// Consume the image and returns a `LumaA` image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_luma_alpha8(self) -> GrayAlphaImage { match self { DynamicImage::ImageLumaA8(x) => x, x => x.to_luma_alpha8(), } } /// Consume the image and returns a `LumaA` image. /// /// If the image was already the correct format, it is returned as is. /// Otherwise, a copy is created. #[must_use] pub fn into_luma_alpha16(self) -> GrayAlpha16Image { match self { DynamicImage::ImageLumaA16(x) => x, x => x.to_luma_alpha16(), } } /// Return a cut-out of this image delimited by the bounding rectangle. /// /// Note: this method does *not* modify the object, /// and its signature will be replaced with `crop_imm()`'s in the 0.24 release #[must_use] pub fn crop(&mut self, x: u32, y: u32, width: u32, height: u32) -> DynamicImage { dynamic_map!(*self, ref mut p => imageops::crop(p, x, y, width, height).to_image()) } /// Return a cut-out of this image delimited by the bounding rectangle. 
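A sketch of the non-destructive variant documented next, `crop_imm`, which leaves the source image untouched:

```rust
use image::DynamicImage;

fn main() {
    let img = DynamicImage::new_rgb8(10, 10);
    let tile = img.crop_imm(2, 2, 4, 4);
    assert_eq!((tile.width(), tile.height()), (4, 4));
    // The original is unchanged.
    assert_eq!((img.width(), img.height()), (10, 10));
}
```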
#[must_use] pub fn crop_imm(&self, x: u32, y: u32, width: u32, height: u32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::crop_imm(p, x, y, width, height).to_image()) } /// Return a reference to an 8bit RGB image #[must_use] pub fn as_rgb8(&self) -> Option<&RgbImage> { match *self { DynamicImage::ImageRgb8(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 8bit RGB image pub fn as_mut_rgb8(&mut self) -> Option<&mut RgbImage> { match *self { DynamicImage::ImageRgb8(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 8bit RGBA image #[must_use] pub fn as_rgba8(&self) -> Option<&RgbaImage> { match *self { DynamicImage::ImageRgba8(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 8bit RGBA image pub fn as_mut_rgba8(&mut self) -> Option<&mut RgbaImage> { match *self { DynamicImage::ImageRgba8(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 8bit Grayscale image #[must_use] pub fn as_luma8(&self) -> Option<&GrayImage> { match *self { DynamicImage::ImageLuma8(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 8bit Grayscale image pub fn as_mut_luma8(&mut self) -> Option<&mut GrayImage> { match *self { DynamicImage::ImageLuma8(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 8bit Grayscale image with an alpha channel #[must_use] pub fn as_luma_alpha8(&self) -> Option<&GrayAlphaImage> { match *self { DynamicImage::ImageLumaA8(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 8bit Grayscale image with an alpha channel pub fn as_mut_luma_alpha8(&mut self) -> Option<&mut GrayAlphaImage> { match *self { DynamicImage::ImageLumaA8(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 16bit RGB image #[must_use] pub fn as_rgb16(&self) -> Option<&Rgb16Image> { match *self { DynamicImage::ImageRgb16(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 16bit RGB image pub fn as_mut_rgb16(&mut self) -> Option<&mut Rgb16Image> { match *self { DynamicImage::ImageRgb16(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 16bit RGBA image #[must_use] pub fn as_rgba16(&self) -> Option<&Rgba16Image> { match *self { DynamicImage::ImageRgba16(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 16bit RGBA image pub fn as_mut_rgba16(&mut self) -> Option<&mut Rgba16Image> { match *self { DynamicImage::ImageRgba16(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 32bit RGB image #[must_use] pub fn as_rgb32f(&self) -> Option<&Rgb32FImage> { match *self { DynamicImage::ImageRgb32F(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 32bit RGB image pub fn as_mut_rgb32f(&mut self) -> Option<&mut Rgb32FImage> { match *self { DynamicImage::ImageRgb32F(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 32bit RGBA image #[must_use] pub fn as_rgba32f(&self) -> Option<&Rgba32FImage> { match *self { DynamicImage::ImageRgba32F(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 16bit RGBA image pub fn as_mut_rgba32f(&mut self) -> Option<&mut Rgba32FImage> { match *self { DynamicImage::ImageRgba32F(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 16bit Grayscale image #[must_use] pub fn as_luma16(&self) -> Option<&Gray16Image> { match *self { DynamicImage::ImageLuma16(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 16bit Grayscale image pub fn as_mut_luma16(&mut self) -> Option<&mut Gray16Image> { match *self { 
DynamicImage::ImageLuma16(ref mut p) => Some(p), _ => None, } } /// Return a reference to an 16bit Grayscale image with an alpha channel #[must_use] pub fn as_luma_alpha16(&self) -> Option<&GrayAlpha16Image> { match *self { DynamicImage::ImageLumaA16(ref p) => Some(p), _ => None, } } /// Return a mutable reference to an 16bit Grayscale image with an alpha channel pub fn as_mut_luma_alpha16(&mut self) -> Option<&mut GrayAlpha16Image> { match *self { DynamicImage::ImageLumaA16(ref mut p) => Some(p), _ => None, } } /// Return a view on the raw sample buffer for 8 bit per channel images. #[must_use] pub fn as_flat_samples_u8(&self) -> Option> { match *self { DynamicImage::ImageLuma8(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageLumaA8(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageRgb8(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageRgba8(ref p) => Some(p.as_flat_samples()), _ => None, } } /// Return a view on the raw sample buffer for 16 bit per channel images. #[must_use] pub fn as_flat_samples_u16(&self) -> Option> { match *self { DynamicImage::ImageLuma16(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageLumaA16(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageRgb16(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageRgba16(ref p) => Some(p.as_flat_samples()), _ => None, } } /// Return a view on the raw sample buffer for 32bit per channel images. #[must_use] pub fn as_flat_samples_f32(&self) -> Option> { match *self { DynamicImage::ImageRgb32F(ref p) => Some(p.as_flat_samples()), DynamicImage::ImageRgba32F(ref p) => Some(p.as_flat_samples()), _ => None, } } /// Return this image's pixels as a native endian byte slice. #[must_use] pub fn as_bytes(&self) -> &[u8] { // we can do this because every variant contains an `ImageBuffer<_, Vec<_>>` dynamic_map!( *self, ref image_buffer, bytemuck::cast_slice(image_buffer.as_raw().as_ref()) ) } // TODO: choose a name under which to expose? fn inner_bytes(&self) -> &[u8] { // we can do this because every variant contains an `ImageBuffer<_, Vec<_>>` dynamic_map!( *self, ref image_buffer, bytemuck::cast_slice(image_buffer.inner_pixels()) ) } /// Return this image's pixels as a byte vector. If the `ImageBuffer` /// container is `Vec`, this operation is free. Otherwise, a copy /// is returned. #[must_use] pub fn into_bytes(self) -> Vec { // we can do this because every variant contains an `ImageBuffer<_, Vec<_>>` dynamic_map!(self, image_buffer, { match bytemuck::allocation::try_cast_vec(image_buffer.into_raw()) { Ok(vec) => vec, Err((_, vec)) => { // Fallback: vector requires an exact alignment and size match // Reuse of the allocation as done in the Ok branch only works if the // underlying container is exactly Vec (or compatible but that's the only // alternative at the time of writing). // In all other cases we must allocate a new vector with the 'same' contents. bytemuck::cast_slice(&vec).to_owned() } } }) } /// Return this image's color type. 
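The raw-byte accessors above expose the native-endian sample buffer directly; its length is `width * height * bytes_per_pixel`. A small sketch:

```rust
use image::DynamicImage;

fn main() {
    let img = DynamicImage::new_rgb8(2, 2);
    // 2 x 2 pixels, 3 bytes per RGB8 pixel.
    assert_eq!(img.as_bytes().len(), 2 * 2 * 3);
    // `into_bytes` consumes the image; for `Vec<u8>`-backed buffers it is free.
    assert_eq!(img.into_bytes().len(), 2 * 2 * 3);
}
```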
#[must_use] pub fn color(&self) -> color::ColorType { match *self { DynamicImage::ImageLuma8(_) => color::ColorType::L8, DynamicImage::ImageLumaA8(_) => color::ColorType::La8, DynamicImage::ImageRgb8(_) => color::ColorType::Rgb8, DynamicImage::ImageRgba8(_) => color::ColorType::Rgba8, DynamicImage::ImageLuma16(_) => color::ColorType::L16, DynamicImage::ImageLumaA16(_) => color::ColorType::La16, DynamicImage::ImageRgb16(_) => color::ColorType::Rgb16, DynamicImage::ImageRgba16(_) => color::ColorType::Rgba16, DynamicImage::ImageRgb32F(_) => color::ColorType::Rgb32F, DynamicImage::ImageRgba32F(_) => color::ColorType::Rgba32F, } } /// Returns the width of the underlying image #[must_use] pub fn width(&self) -> u32 { dynamic_map!(*self, ref p, { p.width() }) } /// Returns the height of the underlying image #[must_use] pub fn height(&self) -> u32 { dynamic_map!(*self, ref p, { p.height() }) } /// Return a grayscale version of this image. /// Returns `Luma` images in most cases. However, for `f32` images, /// this will return a grayscale `Rgb/Rgba` image instead. #[must_use] pub fn grayscale(&self) -> DynamicImage { match *self { DynamicImage::ImageLuma8(ref p) => DynamicImage::ImageLuma8(p.clone()), DynamicImage::ImageLumaA8(ref p) => { DynamicImage::ImageLumaA8(imageops::grayscale_alpha(p)) } DynamicImage::ImageRgb8(ref p) => DynamicImage::ImageLuma8(imageops::grayscale(p)), DynamicImage::ImageRgba8(ref p) => { DynamicImage::ImageLumaA8(imageops::grayscale_alpha(p)) } DynamicImage::ImageLuma16(ref p) => DynamicImage::ImageLuma16(p.clone()), DynamicImage::ImageLumaA16(ref p) => { DynamicImage::ImageLumaA16(imageops::grayscale_alpha(p)) } DynamicImage::ImageRgb16(ref p) => DynamicImage::ImageLuma16(imageops::grayscale(p)), DynamicImage::ImageRgba16(ref p) => { DynamicImage::ImageLumaA16(imageops::grayscale_alpha(p)) } DynamicImage::ImageRgb32F(ref p) => { DynamicImage::ImageRgb32F(imageops::grayscale_with_type(p)) } DynamicImage::ImageRgba32F(ref p) => { DynamicImage::ImageRgba32F(imageops::grayscale_with_type_alpha(p)) } } } /// Invert the colors of this image. /// This method operates inplace. pub fn invert(&mut self) { dynamic_map!(*self, ref mut p, imageops::invert(p)); } /// Resize this image using the specified filter algorithm. /// Returns a new image. The image's aspect ratio is preserved. /// The image is scaled to the maximum possible size that fits /// within the bounds specified by `nwidth` and `nheight`. #[must_use] pub fn resize(&self, nwidth: u32, nheight: u32, filter: imageops::FilterType) -> DynamicImage { if (nwidth, nheight) == self.dimensions() { return self.clone(); } let (width2, height2) = resize_dimensions(self.width(), self.height(), nwidth, nheight, false); self.resize_exact(width2, height2, filter) } /// Resize this image using the specified filter algorithm. /// Returns a new image. Does not preserve aspect ratio. /// `nwidth` and `nheight` are the new image's dimensions #[must_use] pub fn resize_exact( &self, nwidth: u32, nheight: u32, filter: imageops::FilterType, ) -> DynamicImage { dynamic_map!(*self, ref p => imageops::resize(p, nwidth, nheight, filter)) } /// Scale this image down to fit within a specific size. /// Returns a new image. The image's aspect ratio is preserved. /// The image is scaled to the maximum possible size that fits /// within the bounds specified by `nwidth` and `nheight`. /// /// This method uses a fast integer algorithm where each source /// pixel contributes to exactly one target pixel. 
/// May give aliasing artifacts if new size is close to old size. #[must_use] pub fn thumbnail(&self, nwidth: u32, nheight: u32) -> DynamicImage { let (width2, height2) = resize_dimensions(self.width(), self.height(), nwidth, nheight, false); self.thumbnail_exact(width2, height2) } /// Scale this image down to a specific size. /// Returns a new image. Does not preserve aspect ratio. /// `nwidth` and `nheight` are the new image's dimensions. /// This method uses a fast integer algorithm where each source /// pixel contributes to exactly one target pixel. /// May give aliasing artifacts if new size is close to old size. #[must_use] pub fn thumbnail_exact(&self, nwidth: u32, nheight: u32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::thumbnail(p, nwidth, nheight)) } /// Resize this image using the specified filter algorithm. /// Returns a new image. The image's aspect ratio is preserved. /// The image is scaled to the maximum possible size that fits /// within the larger (relative to aspect ratio) of the bounds /// specified by `nwidth` and `nheight`, then cropped to /// fit within the other bound. #[must_use] pub fn resize_to_fill( &self, nwidth: u32, nheight: u32, filter: imageops::FilterType, ) -> DynamicImage { let (width2, height2) = resize_dimensions(self.width(), self.height(), nwidth, nheight, true); let mut intermediate = self.resize_exact(width2, height2, filter); let (iwidth, iheight) = intermediate.dimensions(); let ratio = u64::from(iwidth) * u64::from(nheight); let nratio = u64::from(nwidth) * u64::from(iheight); if nratio > ratio { intermediate.crop(0, (iheight - nheight) / 2, nwidth, nheight) } else { intermediate.crop((iwidth - nwidth) / 2, 0, nwidth, nheight) } } /// Performs a Gaussian blur on this image. /// `sigma` is a measure of how much to blur by. /// Use [DynamicImage::fast_blur()] for a faster but less /// accurate version. #[must_use] pub fn blur(&self, sigma: f32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::blur(p, sigma)) } /// Performs a fast blur on this image. /// `sigma` is the standard deviation of the /// (approximated) Gaussian #[must_use] pub fn fast_blur(&self, sigma: f32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::fast_blur(p, sigma)) } /// Performs an unsharpen mask on this image. /// `sigma` is the amount to blur the image by. /// `threshold` is a control of how much to sharpen. /// /// See #[must_use] pub fn unsharpen(&self, sigma: f32, threshold: i32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::unsharpen(p, sigma, threshold)) } /// Filters this image with the specified 3x3 kernel. #[must_use] pub fn filter3x3(&self, kernel: &[f32]) -> DynamicImage { assert_eq!(9, kernel.len(), "filter must be 3 x 3"); dynamic_map!(*self, ref p => imageops::filter3x3(p, kernel)) } /// Adjust the contrast of this image. /// `contrast` is the amount to adjust the contrast by. /// Negative values decrease the contrast and positive values increase the contrast. #[must_use] pub fn adjust_contrast(&self, c: f32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::contrast(p, c)) } /// Brighten the pixels of this image. /// `value` is the amount to brighten each pixel by. /// Negative values decrease the brightness and positive values increase it. #[must_use] pub fn brighten(&self, value: i32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::brighten(p, value)) } /// Hue rotate the supplied image. /// `value` is the degrees to rotate each pixel by. 
/// 0 and 360 do nothing, the rest rotates by the given degree value. /// just like the css webkit filter hue-rotate(180) #[must_use] pub fn huerotate(&self, value: i32) -> DynamicImage { dynamic_map!(*self, ref p => imageops::huerotate(p, value)) } /// Flip this image vertically /// /// Use [`apply_orientation`](Self::apply_orientation) if you want to flip the image in-place instead. #[must_use] pub fn flipv(&self) -> DynamicImage { dynamic_map!(*self, ref p => imageops::flip_vertical(p)) } /// Flip this image vertically in place fn flipv_in_place(&mut self) { dynamic_map!(*self, ref mut p, imageops::flip_vertical_in_place(p)) } /// Flip this image horizontally /// /// Use [`apply_orientation`](Self::apply_orientation) if you want to flip the image in-place. #[must_use] pub fn fliph(&self) -> DynamicImage { dynamic_map!(*self, ref p => imageops::flip_horizontal(p)) } /// Flip this image horizontally in place fn fliph_in_place(&mut self) { dynamic_map!(*self, ref mut p, imageops::flip_horizontal_in_place(p)) } /// Rotate this image 90 degrees clockwise. #[must_use] pub fn rotate90(&self) -> DynamicImage { dynamic_map!(*self, ref p => imageops::rotate90(p)) } /// Rotate this image 180 degrees. /// /// Use [`apply_orientation`](Self::apply_orientation) if you want to rotate the image in-place. #[must_use] pub fn rotate180(&self) -> DynamicImage { dynamic_map!(*self, ref p => imageops::rotate180(p)) } /// Rotate this image 180 degrees in place. fn rotate180_in_place(&mut self) { dynamic_map!(*self, ref mut p, imageops::rotate180_in_place(p)) } /// Rotate this image 270 degrees clockwise. #[must_use] pub fn rotate270(&self) -> DynamicImage { dynamic_map!(*self, ref p => imageops::rotate270(p)) } /// Rotates and/or flips the image as indicated by [Orientation]. /// /// This can be used to apply Exif orientation to an image, /// e.g. to correctly display a photo taken by a smartphone camera: /// /// ``` /// # fn only_check_if_this_compiles() -> Result<(), Box> { /// use image::{DynamicImage, ImageReader, ImageDecoder}; /// /// let mut decoder = ImageReader::open("file.jpg")?.into_decoder()?; /// let orientation = decoder.orientation()?; /// let mut image = DynamicImage::from_decoder(decoder)?; /// image.apply_orientation(orientation); /// # Ok(()) /// # } /// ``` /// /// Note that for some orientations cannot be efficiently applied in-place. /// In that case this function will make a copy of the image internally. /// /// If this matters to you, please see the documentation on the variants of [Orientation] /// to learn which orientations can and cannot be applied without copying. pub fn apply_orientation(&mut self, orientation: Orientation) { let image = self; match orientation { Orientation::NoTransforms => (), Orientation::Rotate90 => *image = image.rotate90(), Orientation::Rotate180 => image.rotate180_in_place(), Orientation::Rotate270 => *image = image.rotate270(), Orientation::FlipHorizontal => image.fliph_in_place(), Orientation::FlipVertical => image.flipv_in_place(), Orientation::Rotate90FlipH => { let mut new_image = image.rotate90(); new_image.fliph_in_place(); *image = new_image; } Orientation::Rotate270FlipH => { let mut new_image = image.rotate270(); new_image.fliph_in_place(); *image = new_image; } } } /// Encode this image and write it to ```w```. /// /// Assumes the writer is buffered. In most cases, /// you should wrap your writer in a `BufWriter` for best performance. 
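A sketch of `write_to`, defined next, encoding into an in-memory cursor; this assumes the default `png` feature is enabled. For files, wrap the writer in a `BufWriter` as the documentation advises:

```rust
use std::io::Cursor;

use image::{DynamicImage, ImageFormat};

fn main() -> Result<(), image::ImageError> {
    let img = DynamicImage::new_rgb8(8, 8);
    let mut out = Cursor::new(Vec::new());
    img.write_to(&mut out, ImageFormat::Png)?;
    assert!(!out.get_ref().is_empty());
    Ok(())
}
```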
pub fn write_to(&self, w: &mut W, format: ImageFormat) -> ImageResult<()> { let bytes = self.inner_bytes(); let (width, height) = self.dimensions(); let color: ExtendedColorType = self.color().into(); // TODO do not repeat this match statement across the crate #[allow(deprecated)] match format { #[cfg(feature = "png")] ImageFormat::Png => { let p = png::PngEncoder::new(w); p.write_image(bytes, width, height, color)?; Ok(()) } #[cfg(feature = "gif")] ImageFormat::Gif => { let mut g = gif::GifEncoder::new(w); g.encode_frame(crate::animation::Frame::new(self.to_rgba8()))?; Ok(()) } format => write_buffer_with_format(w, bytes, width, height, color, format), } } /// Encode this image with the provided encoder. pub fn write_with_encoder(&self, encoder: impl ImageEncoder) -> ImageResult<()> { dynamic_map!(self, ref p, p.write_with_encoder(encoder)) } /// Saves the buffer to a file at the path specified. /// /// The image format is derived from the file extension. pub fn save(&self, path: Q) -> ImageResult<()> where Q: AsRef, { dynamic_map!(*self, ref p, p.save(path)) } /// Saves the buffer to a file at the specified path in /// the specified format. /// /// See [`save_buffer_with_format`](fn.save_buffer_with_format.html) for /// supported types. pub fn save_with_format(&self, path: Q, format: ImageFormat) -> ImageResult<()> where Q: AsRef, { dynamic_map!(*self, ref p, p.save_with_format(path, format)) } } impl From for DynamicImage { fn from(image: GrayImage) -> Self { DynamicImage::ImageLuma8(image) } } impl From for DynamicImage { fn from(image: GrayAlphaImage) -> Self { DynamicImage::ImageLumaA8(image) } } impl From for DynamicImage { fn from(image: RgbImage) -> Self { DynamicImage::ImageRgb8(image) } } impl From for DynamicImage { fn from(image: RgbaImage) -> Self { DynamicImage::ImageRgba8(image) } } impl From for DynamicImage { fn from(image: Gray16Image) -> Self { DynamicImage::ImageLuma16(image) } } impl From for DynamicImage { fn from(image: GrayAlpha16Image) -> Self { DynamicImage::ImageLumaA16(image) } } impl From for DynamicImage { fn from(image: Rgb16Image) -> Self { DynamicImage::ImageRgb16(image) } } impl From for DynamicImage { fn from(image: Rgba16Image) -> Self { DynamicImage::ImageRgba16(image) } } impl From for DynamicImage { fn from(image: Rgb32FImage) -> Self { DynamicImage::ImageRgb32F(image) } } impl From for DynamicImage { fn from(image: Rgba32FImage) -> Self { DynamicImage::ImageRgba32F(image) } } impl From, Vec>> for DynamicImage { fn from(image: ImageBuffer, Vec>) -> Self { DynamicImage::ImageRgb32F(image.convert()) } } impl From, Vec>> for DynamicImage { fn from(image: ImageBuffer, Vec>) -> Self { DynamicImage::ImageRgba32F(image.convert()) } } #[allow(deprecated)] impl GenericImageView for DynamicImage { type Pixel = color::Rgba; // TODO use f32 as default for best precision and unbounded color? 
fn dimensions(&self) -> (u32, u32) { dynamic_map!(*self, ref p, p.dimensions()) } fn get_pixel(&self, x: u32, y: u32) -> color::Rgba { dynamic_map!(*self, ref p, p.get_pixel(x, y).to_rgba().into_color()) } } #[allow(deprecated)] impl GenericImage for DynamicImage { fn put_pixel(&mut self, x: u32, y: u32, pixel: color::Rgba) { match *self { DynamicImage::ImageLuma8(ref mut p) => p.put_pixel(x, y, pixel.to_luma()), DynamicImage::ImageLumaA8(ref mut p) => p.put_pixel(x, y, pixel.to_luma_alpha()), DynamicImage::ImageRgb8(ref mut p) => p.put_pixel(x, y, pixel.to_rgb()), DynamicImage::ImageRgba8(ref mut p) => p.put_pixel(x, y, pixel), DynamicImage::ImageLuma16(ref mut p) => p.put_pixel(x, y, pixel.to_luma().into_color()), DynamicImage::ImageLumaA16(ref mut p) => { p.put_pixel(x, y, pixel.to_luma_alpha().into_color()); } DynamicImage::ImageRgb16(ref mut p) => p.put_pixel(x, y, pixel.to_rgb().into_color()), DynamicImage::ImageRgba16(ref mut p) => p.put_pixel(x, y, pixel.into_color()), DynamicImage::ImageRgb32F(ref mut p) => p.put_pixel(x, y, pixel.to_rgb().into_color()), DynamicImage::ImageRgba32F(ref mut p) => p.put_pixel(x, y, pixel.into_color()), } } fn blend_pixel(&mut self, x: u32, y: u32, pixel: color::Rgba) { match *self { DynamicImage::ImageLuma8(ref mut p) => p.blend_pixel(x, y, pixel.to_luma()), DynamicImage::ImageLumaA8(ref mut p) => p.blend_pixel(x, y, pixel.to_luma_alpha()), DynamicImage::ImageRgb8(ref mut p) => p.blend_pixel(x, y, pixel.to_rgb()), DynamicImage::ImageRgba8(ref mut p) => p.blend_pixel(x, y, pixel), DynamicImage::ImageLuma16(ref mut p) => { p.blend_pixel(x, y, pixel.to_luma().into_color()); } DynamicImage::ImageLumaA16(ref mut p) => { p.blend_pixel(x, y, pixel.to_luma_alpha().into_color()); } DynamicImage::ImageRgb16(ref mut p) => p.blend_pixel(x, y, pixel.to_rgb().into_color()), DynamicImage::ImageRgba16(ref mut p) => p.blend_pixel(x, y, pixel.into_color()), DynamicImage::ImageRgb32F(ref mut p) => { p.blend_pixel(x, y, pixel.to_rgb().into_color()); } DynamicImage::ImageRgba32F(ref mut p) => p.blend_pixel(x, y, pixel.into_color()), } } /// Do not use is function: It is unimplemented! 
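Since `DynamicImage` implements `GenericImage` and `GenericImageView` as shown above, pixels can be read and written generically as `Rgba<u8>`, with lossy conversion into the underlying variant. A minimal sketch:

```rust
use image::{DynamicImage, GenericImage, GenericImageView, Rgba};

fn main() {
    let mut img = DynamicImage::new_rgba8(2, 2);
    img.put_pixel(0, 0, Rgba([255, 0, 0, 255]));
    assert_eq!(img.get_pixel(0, 0), Rgba([255, 0, 0, 255]));
}
```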
fn get_pixel_mut(&mut self, _: u32, _: u32) -> &mut color::Rgba { unimplemented!() } } impl Default for DynamicImage { fn default() -> Self { Self::ImageRgba8(Default::default()) } } /// Decodes an image and stores it into a dynamic image fn decoder_to_image(decoder: I) -> ImageResult { let (w, h) = decoder.dimensions(); let color_type = decoder.color_type(); let image = match color_type { color::ColorType::Rgb8 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgb8) } color::ColorType::Rgba8 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgba8) } color::ColorType::L8 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageLuma8) } color::ColorType::La8 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageLumaA8) } color::ColorType::Rgb16 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgb16) } color::ColorType::Rgba16 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgba16) } color::ColorType::Rgb32F => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgb32F) } color::ColorType::Rgba32F => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageRgba32F) } color::ColorType::L16 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageLuma16) } color::ColorType::La16 => { let buf = image::decoder_to_vec(decoder)?; ImageBuffer::from_raw(w, h, buf).map(DynamicImage::ImageLumaA16) } }; match image { Some(image) => Ok(image), None => Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))), } } /// Open the image located at the path specified. /// The image's format is determined from the path's file extension. /// /// Try [`ImageReader`] for more advanced uses, including guessing the format based on the file's /// content before its path. pub fn open

<P>(path: P) -> ImageResult<DynamicImage>
where
    P: AsRef<Path>,
{
    ImageReader::open(path)?.decode()
}

/// Read a tuple containing the (width, height) of the image located at the specified path.
/// This is faster than fully loading the image and then getting its dimensions.
///
/// Try [`ImageReader`] for more advanced uses, including guessing the format based on the file's
/// content before its path or manually supplying the format.
pub fn image_dimensions<P>
(path: P) -> ImageResult<(u32, u32)> where P: AsRef, { ImageReader::open(path)?.into_dimensions() } /// Saves the supplied buffer to a file at the path specified. /// /// The image format is derived from the file extension. The buffer is assumed to have /// the correct format according to the specified color type. /// /// This will lead to corrupted files if the buffer contains malformed data. Currently only /// jpeg, png, ico, pnm, bmp, exr and tiff files are supported. pub fn save_buffer( path: impl AsRef, buf: &[u8], width: u32, height: u32, color: impl Into, ) -> ImageResult<()> { // thin wrapper function to strip generics before calling save_buffer_impl free_functions::save_buffer_impl(path.as_ref(), buf, width, height, color.into()) } /// Saves the supplied buffer to a file at the path specified /// in the specified format. /// /// The buffer is assumed to have the correct format according /// to the specified color type. /// This will lead to corrupted files if the buffer contains /// malformed data. Currently only jpeg, png, ico, bmp, exr and /// tiff files are supported. pub fn save_buffer_with_format( path: impl AsRef, buf: &[u8], width: u32, height: u32, color: impl Into, format: ImageFormat, ) -> ImageResult<()> { // thin wrapper function to strip generics free_functions::save_buffer_with_format_impl( path.as_ref(), buf, width, height, color.into(), format, ) } /// Writes the supplied buffer to a writer in the specified format. /// /// The buffer is assumed to have the correct format according to the specified color type. This /// will lead to corrupted writers if the buffer contains malformed data. /// /// Assumes the writer is buffered. In most cases, you should wrap your writer in a `BufWriter` for /// best performance. pub fn write_buffer_with_format( buffered_writer: &mut W, buf: &[u8], width: u32, height: u32, color: impl Into, format: ImageFormat, ) -> ImageResult<()> { // thin wrapper function to strip generics free_functions::write_buffer_impl(buffered_writer, buf, width, height, color.into(), format) } /// Create a new image from a byte slice /// /// Makes an educated guess about the image format. /// TGA is not supported by this function. /// /// Try [`ImageReader`] for more advanced uses. pub fn load_from_memory(buffer: &[u8]) -> ImageResult { let format = free_functions::guess_format(buffer)?; load_from_memory_with_format(buffer, format) } /// Create a new image from a byte slice /// /// This is just a simple wrapper that constructs an `std::io::Cursor` around the buffer and then /// calls `load` with that reader. /// /// Try [`ImageReader`] for more advanced uses. 
/// /// [`load`]: fn.load.html #[inline(always)] pub fn load_from_memory_with_format(buf: &[u8], format: ImageFormat) -> ImageResult { let b = io::Cursor::new(buf); free_functions::load(b, format) } #[cfg(test)] mod bench { #[bench] #[cfg(feature = "benchmarks")] fn bench_conversion(b: &mut test::Bencher) { let a = super::DynamicImage::ImageRgb8(crate::ImageBuffer::new(1000, 1000)); b.iter(|| a.to_luma8()); b.bytes = 1000 * 1000 * 3 } } #[cfg(test)] mod test { use crate::color::ColorType; #[test] fn test_empty_file() { assert!(super::load_from_memory(b"").is_err()); } #[cfg(feature = "jpeg")] #[test] fn image_dimensions() { let im_path = "./tests/images/jpg/progressive/cat.jpg"; let dims = super::image_dimensions(im_path).unwrap(); assert_eq!(dims, (320, 240)); } #[cfg(feature = "png")] #[test] fn open_16bpc_png() { let im_path = "./tests/images/png/16bpc/basn6a16.png"; let image = super::open(im_path).unwrap(); assert_eq!(image.color(), ColorType::Rgba16); } fn test_grayscale(mut img: super::DynamicImage, alpha_discarded: bool) { use crate::image::{GenericImage, GenericImageView}; img.put_pixel(0, 0, crate::color::Rgba([255, 0, 0, 100])); let expected_alpha = if alpha_discarded { 255 } else { 100 }; assert_eq!( img.grayscale().get_pixel(0, 0), crate::color::Rgba([54, 54, 54, expected_alpha]) ); } fn test_grayscale_alpha_discarded(img: super::DynamicImage) { test_grayscale(img, true); } fn test_grayscale_alpha_preserved(img: super::DynamicImage) { test_grayscale(img, false); } #[test] fn test_grayscale_luma8() { test_grayscale_alpha_discarded(super::DynamicImage::new_luma8(1, 1)); test_grayscale_alpha_discarded(super::DynamicImage::new(1, 1, ColorType::L8)); } #[test] fn test_grayscale_luma_a8() { test_grayscale_alpha_preserved(super::DynamicImage::new_luma_a8(1, 1)); test_grayscale_alpha_preserved(super::DynamicImage::new(1, 1, ColorType::La8)); } #[test] fn test_grayscale_rgb8() { test_grayscale_alpha_discarded(super::DynamicImage::new_rgb8(1, 1)); test_grayscale_alpha_discarded(super::DynamicImage::new(1, 1, ColorType::Rgb8)); } #[test] fn test_grayscale_rgba8() { test_grayscale_alpha_preserved(super::DynamicImage::new_rgba8(1, 1)); test_grayscale_alpha_preserved(super::DynamicImage::new(1, 1, ColorType::Rgba8)); } #[test] fn test_grayscale_luma16() { test_grayscale_alpha_discarded(super::DynamicImage::new_luma16(1, 1)); test_grayscale_alpha_discarded(super::DynamicImage::new(1, 1, ColorType::L16)); } #[test] fn test_grayscale_luma_a16() { test_grayscale_alpha_preserved(super::DynamicImage::new_luma_a16(1, 1)); test_grayscale_alpha_preserved(super::DynamicImage::new(1, 1, ColorType::La16)); } #[test] fn test_grayscale_rgb16() { test_grayscale_alpha_discarded(super::DynamicImage::new_rgb16(1, 1)); test_grayscale_alpha_discarded(super::DynamicImage::new(1, 1, ColorType::Rgb16)); } #[test] fn test_grayscale_rgba16() { test_grayscale_alpha_preserved(super::DynamicImage::new_rgba16(1, 1)); test_grayscale_alpha_preserved(super::DynamicImage::new(1, 1, ColorType::Rgba16)); } #[test] fn test_grayscale_rgb32f() { test_grayscale_alpha_discarded(super::DynamicImage::new_rgb32f(1, 1)); test_grayscale_alpha_discarded(super::DynamicImage::new(1, 1, ColorType::Rgb32F)); } #[test] fn test_grayscale_rgba32f() { test_grayscale_alpha_preserved(super::DynamicImage::new_rgba32f(1, 1)); test_grayscale_alpha_preserved(super::DynamicImage::new(1, 1, ColorType::Rgba32F)); } #[test] fn test_dynamic_image_default_implementation() { // Test that structs wrapping a DynamicImage are able to auto-derive the Default 
trait // ensures that DynamicImage implements Default (if it didn't, this would cause a compile error). #[derive(Default)] #[allow(dead_code)] struct Foo { _image: super::DynamicImage, } } #[test] fn test_to_vecu8() { let _ = super::DynamicImage::new_luma8(1, 1).into_bytes(); let _ = super::DynamicImage::new_luma16(1, 1).into_bytes(); } #[test] fn issue_1705_can_turn_16bit_image_into_bytes() { let pixels = vec![65535u16; 64 * 64]; let img = super::ImageBuffer::from_vec(64, 64, pixels).unwrap(); let img = super::DynamicImage::ImageLuma16(img); assert!(img.as_luma16().is_some()); let bytes: Vec = img.into_bytes(); assert_eq!(bytes, vec![0xFF; 64 * 64 * 2]); } } image-0.25.5/src/error.rs000064400000000000000000000414501046102023000132710ustar 00000000000000//! Contains detailed error representation. //! //! See the main [`ImageError`] which contains a variant for each specialized error type. The //! subtypes used in each variant are opaque by design. They can be roughly inspected through their //! respective `kind` methods which work similar to `std::io::Error::kind`. //! //! The error interface makes it possible to inspect the error of an underlying decoder or encoder, //! through the `Error::source` method. Note that this is not part of the stable interface and you //! may not rely on a particular error value for a particular operation. This means mainly that //! `image` does not promise to remain on a particular version of its underlying decoders but if //! you ensure to use the same version of the dependency (or at least of the error type) through //! external means then you could inspect the error type in slightly more detail. //! //! [`ImageError`]: enum.ImageError.html use std::error::Error; use std::{fmt, io}; use crate::color::ExtendedColorType; use crate::image::ImageFormat; /// The generic error type for image operations. /// /// This high level enum allows, by variant matching, a rough separation of concerns between /// underlying IO, the caller, format specifications, and the `image` implementation. #[derive(Debug)] pub enum ImageError { /// An error was encountered while decoding. /// /// This means that the input data did not conform to the specification of some image format, /// or that no format could be determined, or that it did not match format specific /// requirements set by the caller. Decoding(DecodingError), /// An error was encountered while encoding. /// /// The input image can not be encoded with the chosen format, for example because the /// specification has no representation for its color space or because a necessary conversion /// is ambiguous. In some cases it might also happen that the dimensions can not be used with /// the format. Encoding(EncodingError), /// An error was encountered in input arguments. /// /// This is a catch-all case for strictly internal operations such as scaling, conversions, /// etc. that involve no external format specifications. Parameter(ParameterError), /// Completing the operation would have required more resources than allowed. /// /// Errors of this type are limits set by the user or environment, *not* inherent in a specific /// format or operation that was executed. Limits(LimitError), /// An operation can not be completed by the chosen abstraction. /// /// This means that it might be possible for the operation to succeed in general but /// * it requires a disabled feature, /// * the implementation does not yet exist, or /// * no abstraction for a lower level could be found. 
Unsupported(UnsupportedError), /// An error occurred while interacting with the environment. IoError(io::Error), } /// The implementation for an operation was not provided. /// /// See the variant [`Unsupported`] for more documentation. /// /// [`Unsupported`]: enum.ImageError.html#variant.Unsupported #[derive(Debug)] pub struct UnsupportedError { format: ImageFormatHint, kind: UnsupportedErrorKind, } /// Details what feature is not supported. #[derive(Clone, Debug, Hash, PartialEq)] #[non_exhaustive] pub enum UnsupportedErrorKind { /// The required color type can not be handled. Color(ExtendedColorType), /// An image format is not supported. Format(ImageFormatHint), /// Some feature specified by string. /// This is discouraged and is likely to get deprecated (but not removed). GenericFeature(String), } /// An error was encountered while encoding an image. /// /// This is used as an opaque representation for the [`ImageError::Encoding`] variant. See its /// documentation for more information. /// /// [`ImageError::Encoding`]: enum.ImageError.html#variant.Encoding #[derive(Debug)] pub struct EncodingError { format: ImageFormatHint, underlying: Option>, } /// An error was encountered in inputs arguments. /// /// This is used as an opaque representation for the [`ImageError::Parameter`] variant. See its /// documentation for more information. /// /// [`ImageError::Parameter`]: enum.ImageError.html#variant.Parameter #[derive(Debug)] pub struct ParameterError { kind: ParameterErrorKind, underlying: Option>, } /// Details how a parameter is malformed. #[derive(Clone, Debug, Hash, PartialEq)] #[non_exhaustive] pub enum ParameterErrorKind { /// The dimensions passed are wrong. DimensionMismatch, /// Repeated an operation for which error that could not be cloned was emitted already. FailedAlready, /// A string describing the parameter. /// This is discouraged and is likely to get deprecated (but not removed). Generic(String), /// The end of the image has been reached. NoMoreData, } /// An error was encountered while decoding an image. /// /// This is used as an opaque representation for the [`ImageError::Decoding`] variant. See its /// documentation for more information. /// /// [`ImageError::Decoding`]: enum.ImageError.html#variant.Decoding #[derive(Debug)] pub struct DecodingError { format: ImageFormatHint, underlying: Option>, } /// Completing the operation would have required more resources than allowed. /// /// This is used as an opaque representation for the [`ImageError::Limits`] variant. See its /// documentation for more information. /// /// [`ImageError::Limits`]: enum.ImageError.html#variant.Limits #[derive(Debug)] pub struct LimitError { kind: LimitErrorKind, // do we need an underlying error? } /// Indicates the limit that prevented an operation from completing. /// /// Note that this enumeration is not exhaustive and may in the future be extended to provide more /// detailed information or to incorporate other resources types. #[derive(Clone, Debug, Hash, PartialEq, Eq)] #[non_exhaustive] #[allow(missing_copy_implementations)] // Might be non-Copy in the future. pub enum LimitErrorKind { /// The resulting image exceed dimension limits in either direction. DimensionError, /// The operation would have performed an allocation larger than allowed. 
InsufficientMemory, /// The specified strict limits are not supported for this operation Unsupported { /// The given limits limits: crate::Limits, /// The supported strict limits supported: crate::LimitSupport, }, } /// A best effort representation for image formats. #[derive(Clone, Debug, Hash, PartialEq)] #[non_exhaustive] pub enum ImageFormatHint { /// The format is known exactly. Exact(ImageFormat), /// The format can be identified by a name. Name(String), /// A common path extension for the format is known. PathExtension(std::path::PathBuf), /// The format is not known or could not be determined. Unknown, } impl UnsupportedError { /// Create an `UnsupportedError` for an image with details on the unsupported feature. /// /// If the operation was not connected to a particular image format then the hint may be /// `Unknown`. #[must_use] pub fn from_format_and_kind(format: ImageFormatHint, kind: UnsupportedErrorKind) -> Self { UnsupportedError { format, kind } } /// Returns the corresponding `UnsupportedErrorKind` of the error. #[must_use] pub fn kind(&self) -> UnsupportedErrorKind { self.kind.clone() } /// Returns the image format associated with this error. #[must_use] pub fn format_hint(&self) -> ImageFormatHint { self.format.clone() } } impl DecodingError { /// Create a `DecodingError` that stems from an arbitrary error of an underlying decoder. pub fn new(format: ImageFormatHint, err: impl Into>) -> Self { DecodingError { format, underlying: Some(err.into()), } } /// Create a `DecodingError` for an image format. /// /// The error will not contain any further information but is very easy to create. #[must_use] pub fn from_format_hint(format: ImageFormatHint) -> Self { DecodingError { format, underlying: None, } } /// Returns the image format associated with this error. #[must_use] pub fn format_hint(&self) -> ImageFormatHint { self.format.clone() } } impl EncodingError { /// Create an `EncodingError` that stems from an arbitrary error of an underlying encoder. pub fn new(format: ImageFormatHint, err: impl Into>) -> Self { EncodingError { format, underlying: Some(err.into()), } } /// Create an `EncodingError` for an image format. /// /// The error will not contain any further information but is very easy to create. #[must_use] pub fn from_format_hint(format: ImageFormatHint) -> Self { EncodingError { format, underlying: None, } } /// Return the image format associated with this error. #[must_use] pub fn format_hint(&self) -> ImageFormatHint { self.format.clone() } } impl ParameterError { /// Construct a `ParameterError` directly from a corresponding kind. #[must_use] pub fn from_kind(kind: ParameterErrorKind) -> Self { ParameterError { kind, underlying: None, } } /// Returns the corresponding `ParameterErrorKind` of the error. #[must_use] pub fn kind(&self) -> ParameterErrorKind { self.kind.clone() } } impl LimitError { /// Construct a generic `LimitError` directly from a corresponding kind. #[must_use] pub fn from_kind(kind: LimitErrorKind) -> Self { LimitError { kind } } /// Returns the corresponding `LimitErrorKind` of the error. 
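// Editor's sketch (illustrative only): how a hypothetical third-party codec called
// "mycodec" might construct the opaque error types above through their public
// constructors; the format name and messages are made up.
//
// ```
// use image::error::{
//     DecodingError, ImageError, ImageFormatHint, LimitError, LimitErrorKind,
//     UnsupportedError, UnsupportedErrorKind,
// };
// use image::ExtendedColorType;
//
// fn unsupported_color() -> ImageError {
//     ImageError::Unsupported(UnsupportedError::from_format_and_kind(
//         ImageFormatHint::Name("mycodec".to_string()),
//         UnsupportedErrorKind::Color(ExtendedColorType::Rgba32F),
//     ))
// }
//
// fn truncated_input() -> ImageError {
//     ImageError::Decoding(DecodingError::new(
//         ImageFormatHint::Name("mycodec".to_string()),
//         "unexpected end of stream",
//     ))
// }
//
// fn out_of_memory() -> ImageError {
//     ImageError::Limits(LimitError::from_kind(LimitErrorKind::InsufficientMemory))
// }
// ```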
#[must_use] pub fn kind(&self) -> LimitErrorKind { self.kind.clone() } } impl From for ImageError { fn from(err: io::Error) -> ImageError { ImageError::IoError(err) } } impl From for ImageFormatHint { fn from(format: ImageFormat) -> Self { ImageFormatHint::Exact(format) } } impl From<&'_ std::path::Path> for ImageFormatHint { fn from(path: &'_ std::path::Path) -> Self { match path.extension() { Some(ext) => ImageFormatHint::PathExtension(ext.into()), None => ImageFormatHint::Unknown, } } } impl From for UnsupportedError { fn from(hint: ImageFormatHint) -> Self { UnsupportedError { format: hint.clone(), kind: UnsupportedErrorKind::Format(hint), } } } /// Result of an image decoding/encoding process pub type ImageResult = Result; impl fmt::Display for ImageError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match self { ImageError::IoError(err) => err.fmt(fmt), ImageError::Decoding(err) => err.fmt(fmt), ImageError::Encoding(err) => err.fmt(fmt), ImageError::Parameter(err) => err.fmt(fmt), ImageError::Limits(err) => err.fmt(fmt), ImageError::Unsupported(err) => err.fmt(fmt), } } } impl Error for ImageError { fn source(&self) -> Option<&(dyn Error + 'static)> { match self { ImageError::IoError(err) => err.source(), ImageError::Decoding(err) => err.source(), ImageError::Encoding(err) => err.source(), ImageError::Parameter(err) => err.source(), ImageError::Limits(err) => err.source(), ImageError::Unsupported(err) => err.source(), } } } impl fmt::Display for UnsupportedError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match &self.kind { UnsupportedErrorKind::Format(ImageFormatHint::Unknown) => { write!(fmt, "The image format could not be determined",) } UnsupportedErrorKind::Format(format @ ImageFormatHint::PathExtension(_)) => write!( fmt, "The file extension {format} was not recognized as an image format", ), UnsupportedErrorKind::Format(format) => { write!(fmt, "The image format {format} is not supported",) } UnsupportedErrorKind::Color(color) => write!( fmt, "The encoder or decoder for {} does not support the color type `{:?}`", self.format, color, ), UnsupportedErrorKind::GenericFeature(message) => match &self.format { ImageFormatHint::Unknown => write!( fmt, "The decoder does not support the format feature {message}", ), other => write!( fmt, "The decoder for {other} does not support the format features {message}", ), }, } } } impl Error for UnsupportedError {} impl fmt::Display for ParameterError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match &self.kind { ParameterErrorKind::DimensionMismatch => write!( fmt, "The Image's dimensions are either too \ small or too large" ), ParameterErrorKind::FailedAlready => write!( fmt, "The end the image stream has been reached due to a previous error" ), ParameterErrorKind::Generic(message) => { write!(fmt, "The parameter is malformed: {message}",) } ParameterErrorKind::NoMoreData => write!(fmt, "The end of the image has been reached",), }?; if let Some(underlying) = &self.underlying { write!(fmt, "\n{underlying}")?; } Ok(()) } } impl Error for ParameterError { fn source(&self) -> Option<&(dyn Error + 'static)> { match &self.underlying { None => None, Some(source) => Some(&**source), } } } impl fmt::Display for EncodingError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match &self.underlying { Some(underlying) => write!( fmt, "Format error encoding {}:\n{}", self.format, underlying, ), None => write!(fmt, "Format error encoding {}", self.format,), } } 
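// Editor's sketch (illustrative only): how a caller might branch on the
// `ImageError` variants and use the `kind()`/`source()` accessors defined above.
//
// ```
// use std::error::Error as _;
// use image::error::{ImageError, LimitErrorKind};
//
// fn report(err: &ImageError) {
//     match err {
//         ImageError::IoError(io) => eprintln!("I/O problem: {io}"),
//         ImageError::Decoding(decoding) => {
//             eprintln!("decoding failed for {:?}", decoding.format_hint());
//             if let Some(source) = decoding.source() {
//                 eprintln!("  caused by: {source}");
//             }
//         }
//         ImageError::Limits(limit) if limit.kind() == LimitErrorKind::InsufficientMemory => {
//             eprintln!("memory limit exceeded");
//         }
//         other => eprintln!("{other}"),
//     }
// }
// ```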
} impl Error for EncodingError { fn source(&self) -> Option<&(dyn Error + 'static)> { match &self.underlying { None => None, Some(source) => Some(&**source), } } } impl fmt::Display for DecodingError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match &self.underlying { None => match self.format { ImageFormatHint::Unknown => write!(fmt, "Format error"), _ => write!(fmt, "Format error decoding {}", self.format), }, Some(underlying) => { write!(fmt, "Format error decoding {}: {}", self.format, underlying) } } } } impl Error for DecodingError { fn source(&self) -> Option<&(dyn Error + 'static)> { match &self.underlying { None => None, Some(source) => Some(&**source), } } } impl fmt::Display for LimitError { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match self.kind { LimitErrorKind::InsufficientMemory => write!(fmt, "Memory limit exceeded"), LimitErrorKind::DimensionError => write!(fmt, "Image size exceeds limit"), LimitErrorKind::Unsupported { .. } => { write!(fmt, "The following strict limits are specified but not supported by the opertation: ")?; Ok(()) } } } } impl Error for LimitError {} impl fmt::Display for ImageFormatHint { fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> { match self { ImageFormatHint::Exact(format) => write!(fmt, "{format:?}"), ImageFormatHint::Name(name) => write!(fmt, "`{name}`"), ImageFormatHint::PathExtension(ext) => write!(fmt, "`.{ext:?}`"), ImageFormatHint::Unknown => write!(fmt, "`Unknown`"), } } } #[cfg(test)] mod tests { use super::*; use std::mem::size_of; #[allow(dead_code)] // This will fail to compile if the size of this type is large. const ASSERT_SMALLISH: usize = [0][(size_of::() >= 200) as usize]; #[test] fn test_send_sync_stability() { fn assert_send_sync() {} assert_send_sync::(); } } image-0.25.5/src/flat.rs000064400000000000000000001751031046102023000130710ustar 00000000000000//! Image representations for ffi. //! //! # Usage //! //! Imagine you want to offer a very simple ffi interface: The caller provides an image buffer and //! your program creates a thumbnail from it and dumps that image as `png`. This module is designed //! to help you transition from raw memory data to Rust representation. //! //! ```no_run //! use std::ptr; //! use std::slice; //! use image::Rgb; //! use image::flat::{FlatSamples, SampleLayout}; //! use image::imageops::thumbnail; //! //! #[no_mangle] //! pub extern "C" fn store_rgb8_compressed( //! data: *const u8, len: usize, //! layout: *const SampleLayout //! ) //! -> bool //! { //! let samples = unsafe { slice::from_raw_parts(data, len) }; //! let layout = unsafe { ptr::read(layout) }; //! //! let buffer = FlatSamples { //! samples, //! layout, //! color_hint: None, //! }; //! //! let view = match buffer.as_view::>() { //! Err(_) => return false, // Invalid layout. //! Ok(view) => view, //! }; //! //! thumbnail(&view, 64, 64) //! .save("output.png") //! .map(|_| true) //! .unwrap_or_else(|_| false) //! } //! ``` //! use std::marker::PhantomData; use std::ops::{Deref, Index, IndexMut}; use std::{cmp, error, fmt}; use num_traits::Zero; use crate::color::ColorType; use crate::error::{ DecodingError, ImageError, ImageFormatHint, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::image::{GenericImage, GenericImageView}; use crate::traits::Pixel; use crate::ImageBuffer; /// A flat buffer over a (multi channel) image. 
/// /// In contrast to `ImageBuffer`, this representation of a sample collection is much more lenient /// in the layout thereof. It also allows grouping by color planes instead of by pixel as long as /// the strides of each extent are constant. This struct itself has no invariants on the strides /// but not every possible configuration can be interpreted as a [`GenericImageView`] or /// [`GenericImage`]. The methods [`as_view`] and [`as_view_mut`] construct the actual implementors /// of these traits and perform necessary checks. To manually perform this and other layout checks /// use [`is_normal`] or [`has_aliased_samples`]. /// /// Instances can be constructed not only by hand. The buffer instances returned by library /// functions such as [`ImageBuffer::as_flat_samples`] guarantee that the conversion to a generic /// image or generic view succeeds. A very different constructor is [`with_monocolor`]. It uses a /// single pixel as the backing storage for an arbitrarily sized read-only raster by mapping each /// pixel to the same samples by setting some strides to `0`. /// /// [`GenericImage`]: ../trait.GenericImage.html /// [`GenericImageView`]: ../trait.GenericImageView.html /// [`ImageBuffer::as_flat_samples`]: ../struct.ImageBuffer.html#method.as_flat_samples /// [`is_normal`]: #method.is_normal /// [`has_aliased_samples`]: #method.has_aliased_samples /// [`as_view`]: #method.as_view /// [`as_view_mut`]: #method.as_view_mut /// [`with_monocolor`]: #method.with_monocolor #[derive(Clone, Debug)] pub struct FlatSamples { /// Underlying linear container holding sample values. pub samples: Buffer, /// A `repr(C)` description of the layout of buffer samples. pub layout: SampleLayout, /// Supplementary color information. /// /// You may keep this as `None` in most cases. This is NOT checked in `View` or other /// converters. It is intended mainly as a way for types that convert to this buffer type to /// attach their otherwise static color information. A dynamic image representation could /// however use this to resolve representational ambiguities such as the order of RGB channels. pub color_hint: Option, } /// A ffi compatible description of a sample buffer. #[repr(C)] #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] pub struct SampleLayout { /// The number of channels in the color representation of the image. pub channels: u8, /// Add this to an index to get to the sample in the next channel. pub channel_stride: usize, /// The width of the represented image. pub width: u32, /// Add this to an index to get to the next sample in x-direction. pub width_stride: usize, /// The height of the represented image. pub height: u32, /// Add this to an index to get to the next sample in y-direction. pub height_stride: usize, } /// Helper struct for an unnamed (stride, length) pair. #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)] struct Dim(usize, usize); impl SampleLayout { /// Describe a row-major image packed in all directions. /// /// The resulting will surely be `NormalForm::RowMajorPacked`. It can therefore be converted to /// safely to an `ImageBuffer` with a large enough underlying buffer. /// /// ``` /// # use image::flat::{NormalForm, SampleLayout}; /// let layout = SampleLayout::row_major_packed(3, 640, 480); /// assert!(layout.is_normal(NormalForm::RowMajorPacked)); /// ``` /// /// # Panics /// /// On platforms where `usize` has the same size as `u32` this panics when the resulting stride /// in the `height` direction would be larger than `usize::MAX`. 
On other platforms /// where it can surely accommodate `u8::MAX * u32::MAX, this can never happen. #[must_use] pub fn row_major_packed(channels: u8, width: u32, height: u32) -> Self { let height_stride = (channels as usize).checked_mul(width as usize).expect( "Row major packed image can not be described because it does not fit into memory", ); SampleLayout { channels, channel_stride: 1, width, width_stride: channels as usize, height, height_stride, } } /// Describe a column-major image packed in all directions. /// /// The resulting will surely be `NormalForm::ColumnMajorPacked`. This is not particularly /// useful for conversion but can be used to describe such a buffer without pitfalls. /// /// ``` /// # use image::flat::{NormalForm, SampleLayout}; /// let layout = SampleLayout::column_major_packed(3, 640, 480); /// assert!(layout.is_normal(NormalForm::ColumnMajorPacked)); /// ``` /// /// # Panics /// /// On platforms where `usize` has the same size as `u32` this panics when the resulting stride /// in the `width` direction would be larger than `usize::MAX`. On other platforms /// where it can surely accommodate `u8::MAX * u32::MAX, this can never happen. #[must_use] pub fn column_major_packed(channels: u8, width: u32, height: u32) -> Self { let width_stride = (channels as usize).checked_mul(height as usize).expect( "Column major packed image can not be described because it does not fit into memory", ); SampleLayout { channels, channel_stride: 1, height, height_stride: channels as usize, width, width_stride, } } /// Get the strides for indexing matrix-like `[(c, w, h)]`. /// /// For a row-major layout with grouped samples, this tuple is strictly /// increasing. #[must_use] pub fn strides_cwh(&self) -> (usize, usize, usize) { (self.channel_stride, self.width_stride, self.height_stride) } /// Get the dimensions `(channels, width, height)`. /// /// The interface is optimized for use with `strides_cwh` instead. The channel extent will be /// before width and height. #[must_use] pub fn extents(&self) -> (usize, usize, usize) { ( self.channels as usize, self.width as usize, self.height as usize, ) } /// Tuple of bounds in the order of coordinate inputs. /// /// This function should be used whenever working with image coordinates opposed to buffer /// coordinates. The only difference compared to `extents` is the output type. #[must_use] pub fn bounds(&self) -> (u8, u32, u32) { (self.channels, self.width, self.height) } /// Get the minimum length of a buffer such that all in-bounds samples have valid indices. /// /// This method will allow zero strides, allowing compact representations of monochrome images. /// To check that no aliasing occurs, try `check_alias_invariants`. For compact images (no /// aliasing and no unindexed samples) this is `width*height*channels`. But for both of the /// other cases, the reasoning is slightly more involved. /// /// # Explanation /// /// Note that there is a difference between `min_length` and the index of the sample /// 'one-past-the-end`. This is due to strides that may be larger than the dimension below. /// /// ## Example with holes /// /// Let's look at an example of a grayscale image with /// * `width_stride = 1` /// * `width = 2` /// * `height_stride = 3` /// * `height = 2` /// /// ```text /// | x x | x x m | $ /// min_length m ^ /// ^ one-past-the-end $ /// ``` /// /// The difference is also extreme for empty images with large strides. The one-past-the-end /// sample index is still as large as the largest of these strides while `min_length = 0`. 
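// Editor's sketch (illustrative only): what `row_major_packed` produces for an
// interleaved RGB image; the numbers follow directly from the stride definitions above.
//
// ```
// use image::flat::SampleLayout;
//
// let layout = SampleLayout::row_major_packed(3, 640, 480);
//
// // One step per channel, three per pixel, 3 * 640 per row.
// assert_eq!(layout.strides_cwh(), (1, 3, 3 * 640));
//
// // Green sample (c = 1) of the pixel at (x = 2, y = 1): 1*1 + 2*3 + 1*1920 = 1927.
// assert_eq!(layout.index(1, 2, 1), Some(1927));
//
// // A packed buffer needs exactly channels * width * height samples.
// assert_eq!(layout.min_length(), Some(3 * 640 * 480));
// ```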
/// /// ## Example with aliasing /// /// The concept gets even more important when you allow samples to alias each other. Here we /// have the buffer of a small grayscale image where this is the case, this time we will first /// show the buffer and then the individual rows below. /// /// * `width_stride = 1` /// * `width = 3` /// * `height_stride = 2` /// * `height = 2` /// /// ```text /// 1 2 3 4 5 m /// |1 2 3| row one /// |3 4 5| row two /// ^ m min_length /// ^ ??? one-past-the-end /// ``` /// /// This time 'one-past-the-end' is not even simply the largest stride times the extent of its /// dimension. That still points inside the image because `height*height_stride = 4` but also /// `index_of(1, 2) = 4`. #[must_use] pub fn min_length(&self) -> Option { if self.width == 0 || self.height == 0 || self.channels == 0 { return Some(0); } self.index(self.channels - 1, self.width - 1, self.height - 1) .and_then(|idx| idx.checked_add(1)) } /// Check if a buffer of length `len` is large enough. #[must_use] pub fn fits(&self, len: usize) -> bool { self.min_length().map_or(false, |min| len >= min) } /// The extents of this array, in order of increasing strides. fn increasing_stride_dims(&self) -> [Dim; 3] { // Order extents by strides, then check that each is less equal than the next stride. let mut grouped: [Dim; 3] = [ Dim(self.channel_stride, self.channels as usize), Dim(self.width_stride, self.width as usize), Dim(self.height_stride, self.height as usize), ]; grouped.sort(); let (min_dim, mid_dim, max_dim) = (grouped[0], grouped[1], grouped[2]); assert!(min_dim.stride() <= mid_dim.stride() && mid_dim.stride() <= max_dim.stride()); grouped } /// If there are any samples aliasing each other. /// /// If this is not the case, it would always be safe to allow mutable access to two different /// samples at the same time. Otherwise, this operation would need additional checks. When one /// dimension overflows `usize` with its stride we also consider this aliasing. #[must_use] pub fn has_aliased_samples(&self) -> bool { let grouped = self.increasing_stride_dims(); let (min_dim, mid_dim, max_dim) = (grouped[0], grouped[1], grouped[2]); let min_size = match min_dim.checked_len() { None => return true, Some(size) => size, }; let mid_size = match mid_dim.checked_len() { None => return true, Some(size) => size, }; if max_dim.checked_len().is_none() { return true; }; // Each higher dimension must walk over all of one lower dimension. min_size > mid_dim.stride() || mid_size > max_dim.stride() } /// Check if a buffer fulfills the requirements of a normal form. /// /// Certain conversions have preconditions on the structure of the sample buffer that are not /// captured (by design) by the type system. These are then checked before the conversion. Such /// checks can all be done in constant time and will not inspect the buffer content. You can /// perform these checks yourself when the conversion is not required at this moment but maybe /// still performed later. #[must_use] pub fn is_normal(&self, form: NormalForm) -> bool { if self.has_aliased_samples() { return false; } if form >= NormalForm::PixelPacked && self.channel_stride != 1 { return false; } if form >= NormalForm::ImagePacked { // has aliased already checked for overflows. 
let grouped = self.increasing_stride_dims(); let (min_dim, mid_dim, max_dim) = (grouped[0], grouped[1], grouped[2]); if 1 != min_dim.stride() { return false; } if min_dim.len() != mid_dim.stride() { return false; } if mid_dim.len() != max_dim.stride() { return false; } } if form >= NormalForm::RowMajorPacked { if self.width_stride != self.channels as usize { return false; } if self.width as usize * self.width_stride != self.height_stride { return false; } } if form >= NormalForm::ColumnMajorPacked { if self.height_stride != self.channels as usize { return false; } if self.height as usize * self.height_stride != self.width_stride { return false; } } true } /// Check that the pixel and the channel index are in bounds. /// /// An in-bound coordinate does not yet guarantee that the corresponding calculation of a /// buffer index does not overflow. However, if such a buffer large enough to hold all samples /// actually exists in memory, this property of course follows. #[must_use] pub fn in_bounds(&self, channel: u8, x: u32, y: u32) -> bool { channel < self.channels && x < self.width && y < self.height } /// Resolve the index of a particular sample. /// /// `None` if the index is outside the bounds or does not fit into a `usize`. #[must_use] pub fn index(&self, channel: u8, x: u32, y: u32) -> Option { if !self.in_bounds(channel, x, y) { return None; } self.index_ignoring_bounds(channel as usize, x as usize, y as usize) } /// Get the theoretical position of sample (channel, x, y). /// /// The 'check' is for overflow during index calculation, not that it is contained in the /// image. Two samples may return the same index, even when one of them is out of bounds. This /// happens when all strides are `0`, i.e. the image is an arbitrarily large monochrome image. #[must_use] pub fn index_ignoring_bounds(&self, channel: usize, x: usize, y: usize) -> Option { let idx_c = channel.checked_mul(self.channel_stride); let idx_x = x.checked_mul(self.width_stride); let idx_y = y.checked_mul(self.height_stride); let (Some(idx_c), Some(idx_x), Some(idx_y)) = (idx_c, idx_x, idx_y) else { return None; }; Some(0usize) .and_then(|b| b.checked_add(idx_c)) .and_then(|b| b.checked_add(idx_x)) .and_then(|b| b.checked_add(idx_y)) } /// Get an index provided it is inbouds. /// /// Assumes that the image is backed by some sufficiently large buffer. Then computation can /// not overflow as we could represent the maximum coordinate. Since overflow is defined either /// way, this method can not be unsafe. /// /// Behavior is *unspecified* if the index is out of bounds or this sample layout would require /// a buffer larger than `isize::MAX` bytes. #[must_use] pub fn in_bounds_index(&self, c: u8, x: u32, y: u32) -> usize { let (c_stride, x_stride, y_stride) = self.strides_cwh(); (y as usize * y_stride) + (x as usize * x_stride) + (c as usize * c_stride) } /// Shrink the image to the minimum of current and given extents. /// /// This does not modify the strides, so that the resulting sample buffer may have holes /// created by the shrinking operation. Shrinking could also lead to an non-aliasing image when /// samples had aliased each other before. pub fn shrink_to(&mut self, channels: u8, width: u32, height: u32) { self.channels = self.channels.min(channels); self.width = self.width.min(width); self.height = self.height.min(height); } } impl Dim { fn stride(self) -> usize { self.0 } /// Length of this dimension in memory. 
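// Editor's sketch (illustrative only): the two small grayscale layouts from the
// `min_length` documentation above, written out as `SampleLayout` values; the
// expected numbers follow from the index formula.
//
// ```
// use image::flat::{NormalForm, SampleLayout};
//
// // Holes: width_stride = 1, width = 2, height_stride = 3, height = 2.
// // The sample at index 2 is never addressed; the last in-bounds sample is at index 4.
// let holes = SampleLayout {
//     channels: 1, channel_stride: 1,
//     width: 2, width_stride: 1,
//     height: 2, height_stride: 3,
// };
// assert_eq!(holes.min_length(), Some(5));
// assert!(!holes.has_aliased_samples());
//
// // Aliasing: width_stride = 1, width = 3, height_stride = 2, height = 2.
// // The rows overlap, so mutable pixel access must be rejected.
// let aliased = SampleLayout {
//     channels: 1, channel_stride: 1,
//     width: 3, width_stride: 1,
//     height: 2, height_stride: 2,
// };
// assert!(aliased.has_aliased_samples());
// assert!(!aliased.is_normal(NormalForm::Unaliased));
// ```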
fn checked_len(self) -> Option { self.0.checked_mul(self.1) } fn len(self) -> usize { self.0 * self.1 } } impl FlatSamples { /// Get the strides for indexing matrix-like `[(c, w, h)]`. /// /// For a row-major layout with grouped samples, this tuple is strictly /// increasing. pub fn strides_cwh(&self) -> (usize, usize, usize) { self.layout.strides_cwh() } /// Get the dimensions `(channels, width, height)`. /// /// The interface is optimized for use with `strides_cwh` instead. The channel extent will be /// before width and height. pub fn extents(&self) -> (usize, usize, usize) { self.layout.extents() } /// Tuple of bounds in the order of coordinate inputs. /// /// This function should be used whenever working with image coordinates opposed to buffer /// coordinates. The only difference compared to `extents` is the output type. pub fn bounds(&self) -> (u8, u32, u32) { self.layout.bounds() } /// Get a reference based version. pub fn as_ref(&self) -> FlatSamples<&[T]> where Buffer: AsRef<[T]>, { FlatSamples { samples: self.samples.as_ref(), layout: self.layout, color_hint: self.color_hint, } } /// Get a mutable reference based version. pub fn as_mut(&mut self) -> FlatSamples<&mut [T]> where Buffer: AsMut<[T]>, { FlatSamples { samples: self.samples.as_mut(), layout: self.layout, color_hint: self.color_hint, } } /// Copy the data into an owned vector. pub fn to_vec(&self) -> FlatSamples> where T: Clone, Buffer: AsRef<[T]>, { FlatSamples { samples: self.samples.as_ref().to_vec(), layout: self.layout, color_hint: self.color_hint, } } /// Get a reference to a single sample. /// /// This more restrictive than the method based on `std::ops::Index` but guarantees to properly /// check all bounds and not panic as long as `Buffer::as_ref` does not do so. /// /// ``` /// # use image::{RgbImage}; /// let flat = RgbImage::new(480, 640).into_flat_samples(); /// /// // Get the blue channel at (10, 10). /// assert!(flat.get_sample(1, 10, 10).is_some()); /// /// // There is no alpha channel. /// assert!(flat.get_sample(3, 10, 10).is_none()); /// ``` /// /// For cases where a special buffer does not provide `AsRef<[T]>`, consider encapsulating /// bounds checks with `min_length` in a type similar to `View`. Then you may use /// `in_bounds_index` as a small speedup over the index calculation of this method which relies /// on `index_ignoring_bounds` since it can not have a-priori knowledge that the sample /// coordinate is in fact backed by any memory buffer. pub fn get_sample(&self, channel: u8, x: u32, y: u32) -> Option<&T> where Buffer: AsRef<[T]>, { self.index(channel, x, y) .and_then(|idx| self.samples.as_ref().get(idx)) } /// Get a mutable reference to a single sample. /// /// This more restrictive than the method based on `std::ops::IndexMut` but guarantees to /// properly check all bounds and not panic as long as `Buffer::as_ref` does not do so. /// Contrary to conversion to `ViewMut`, this does not require that samples are packed since it /// does not need to convert samples to a color representation. /// /// **WARNING**: Note that of course samples may alias, so that the mutable reference returned /// here can in fact modify more than the coordinate in the argument. /// /// ``` /// # use image::{RgbImage}; /// let mut flat = RgbImage::new(480, 640).into_flat_samples(); /// /// // Assign some new color to the blue channel at (10, 10). /// *flat.get_mut_sample(1, 10, 10).unwrap() = 255; /// /// // There is no alpha channel. 
/// assert!(flat.get_mut_sample(3, 10, 10).is_none()); /// ``` /// /// For cases where a special buffer does not provide `AsRef<[T]>`, consider encapsulating /// bounds checks with `min_length` in a type similar to `View`. Then you may use /// `in_bounds_index` as a small speedup over the index calculation of this method which relies /// on `index_ignoring_bounds` since it can not have a-priori knowledge that the sample /// coordinate is in fact backed by any memory buffer. pub fn get_mut_sample(&mut self, channel: u8, x: u32, y: u32) -> Option<&mut T> where Buffer: AsMut<[T]>, { match self.index(channel, x, y) { None => None, Some(idx) => self.samples.as_mut().get_mut(idx), } } /// View this buffer as an image over some type of pixel. /// /// This first ensures that all in-bounds coordinates refer to valid indices in the sample /// buffer. It also checks that the specified pixel format expects the same number of channels /// that are present in this buffer. Neither are larger nor a smaller number will be accepted. /// There is no automatic conversion. pub fn as_view
<P>
(&self) -> Result, Error> where P: Pixel, Buffer: AsRef<[P::Subpixel]>, { if self.layout.channels != P::CHANNEL_COUNT { return Err(Error::ChannelCountMismatch( self.layout.channels, P::CHANNEL_COUNT, )); } let as_ref = self.samples.as_ref(); if !self.layout.fits(as_ref.len()) { return Err(Error::TooLarge); } Ok(View { inner: FlatSamples { samples: as_ref, layout: self.layout, color_hint: self.color_hint, }, phantom: PhantomData, }) } /// View this buffer but keep mutability at a sample level. /// /// This is similar to `as_view` but subtly different from `as_view_mut`. The resulting type /// can be used as a `GenericImage` with the same prior invariants needed as for `as_view`. /// It can not be used as a mutable `GenericImage` but does not need channels to be packed in /// their pixel representation. /// /// This first ensures that all in-bounds coordinates refer to valid indices in the sample /// buffer. It also checks that the specified pixel format expects the same number of channels /// that are present in this buffer. Neither are larger nor a smaller number will be accepted. /// There is no automatic conversion. /// /// **WARNING**: Note that of course samples may alias, so that the mutable reference returned /// for one sample can in fact modify other samples as well. Sometimes exactly this is /// intended. pub fn as_view_with_mut_samples
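// Editor's sketch (illustrative only): borrowing a plain sample slice as a
// read-only `View` with `as_view`, then reading pixels through `GenericImageView`;
// the two-pixel buffer is made up.
//
// ```
// use image::flat::{FlatSamples, SampleLayout};
// use image::{GenericImageView, Rgb};
//
// // Two RGB pixels side by side, row-major and fully packed.
// let samples: [u8; 6] = [255, 0, 0, 0, 255, 0];
// let buffer = FlatSamples {
//     samples: &samples[..],
//     layout: SampleLayout::row_major_packed(3, 2, 1),
//     color_hint: None,
// };
//
// // Fails if the layout does not fit the slice or the channel count is wrong.
// let view = buffer.as_view::<Rgb<u8>>().expect("layout and channel count are valid");
// assert_eq!(view.dimensions(), (2, 1));
// assert_eq!(view.get_pixel(1, 0), Rgb([0, 255, 0]));
// ```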
<P>
(&mut self) -> Result, Error> where P: Pixel, Buffer: AsMut<[P::Subpixel]>, { if self.layout.channels != P::CHANNEL_COUNT { return Err(Error::ChannelCountMismatch( self.layout.channels, P::CHANNEL_COUNT, )); } let as_mut = self.samples.as_mut(); if !self.layout.fits(as_mut.len()) { return Err(Error::TooLarge); } Ok(View { inner: FlatSamples { samples: as_mut, layout: self.layout, color_hint: self.color_hint, }, phantom: PhantomData, }) } /// Interpret this buffer as a mutable image. /// /// To succeed, the pixels in this buffer may not alias each other and the samples of each /// pixel must be packed (i.e. `channel_stride` is `1`). The number of channels must be /// consistent with the channel count expected by the pixel format. /// /// This is similar to an `ImageBuffer` except it is a temporary view that is not normalized as /// strongly. To get an owning version, consider copying the data into an `ImageBuffer`. This /// provides many more operations, is possibly faster (if not you may want to open an issue) is /// generally polished. You can also try to convert this buffer inline, see /// `ImageBuffer::from_raw`. pub fn as_view_mut
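// Editor's sketch (illustrative only): the difference between the two mutable
// entry points. A planar layout (one full plane per channel) supports per-sample
// mutation via `as_view_with_mut_samples`, but `as_view_mut` rejects it because
// the pixels are not packed.
//
// ```
// use image::flat::{Error, FlatSamples, NormalForm, SampleLayout};
// use image::Rgb;
//
// // Planar RGB for a 2x2 image: all R samples, then all G, then all B.
// let mut samples = vec![0u8; 3 * 2 * 2];
// let mut buffer = FlatSamples {
//     samples: &mut samples[..],
//     layout: SampleLayout {
//         channels: 3, channel_stride: 4, // one 2x2 plane per channel
//         width: 2, width_stride: 1,
//         height: 2, height_stride: 2,
//     },
//     color_hint: None,
// };
//
// // Per-sample mutation works even though the pixels are not packed …
// assert!(buffer.as_view_with_mut_samples::<Rgb<u8>>().is_ok());
//
// // … but a pixel-level mutable view requires `NormalForm::PixelPacked`.
// assert!(matches!(
//     buffer.as_view_mut::<Rgb<u8>>(),
//     Err(Error::NormalFormRequired(NormalForm::PixelPacked))
// ));
// ```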
<P>
(&mut self) -> Result, Error> where P: Pixel, Buffer: AsMut<[P::Subpixel]>, { if !self.layout.is_normal(NormalForm::PixelPacked) { return Err(Error::NormalFormRequired(NormalForm::PixelPacked)); } if self.layout.channels != P::CHANNEL_COUNT { return Err(Error::ChannelCountMismatch( self.layout.channels, P::CHANNEL_COUNT, )); } let as_mut = self.samples.as_mut(); if !self.layout.fits(as_mut.len()) { return Err(Error::TooLarge); } Ok(ViewMut { inner: FlatSamples { samples: as_mut, layout: self.layout, color_hint: self.color_hint, }, phantom: PhantomData, }) } /// View the samples as a slice. /// /// The slice is not limited to the region of the image and not all sample indices are valid /// indices into this buffer. See `image_mut_slice` as an alternative. pub fn as_slice(&self) -> &[T] where Buffer: AsRef<[T]>, { self.samples.as_ref() } /// View the samples as a slice. /// /// The slice is not limited to the region of the image and not all sample indices are valid /// indices into this buffer. See `image_mut_slice` as an alternative. pub fn as_mut_slice(&mut self) -> &mut [T] where Buffer: AsMut<[T]>, { self.samples.as_mut() } /// Return the portion of the buffer that holds sample values. /// /// This may fail when the coordinates in this image are either out-of-bounds of the underlying /// buffer or can not be represented. Note that the slice may have holes that do not correspond /// to any sample in the image represented by it. pub fn image_slice(&self) -> Option<&[T]> where Buffer: AsRef<[T]>, { let min_length = match self.min_length() { None => return None, Some(index) => index, }; let slice = self.samples.as_ref(); if slice.len() < min_length { return None; } Some(&slice[..min_length]) } /// Mutable portion of the buffer that holds sample values. pub fn image_mut_slice(&mut self) -> Option<&mut [T]> where Buffer: AsMut<[T]>, { let min_length = match self.min_length() { None => return None, Some(index) => index, }; let slice = self.samples.as_mut(); if slice.len() < min_length { return None; } Some(&mut slice[..min_length]) } /// Move the data into an image buffer. /// /// This does **not** convert the sample layout. The buffer needs to be in packed row-major form /// before calling this function. In case of an error, returns the buffer again so that it does /// not release any allocation. pub fn try_into_buffer
<P>
(self) -> Result, (Error, Self)> where P: Pixel + 'static, P::Subpixel: 'static, Buffer: Deref, { if !self.is_normal(NormalForm::RowMajorPacked) { return Err((Error::NormalFormRequired(NormalForm::RowMajorPacked), self)); } if self.layout.channels != P::CHANNEL_COUNT { return Err(( Error::ChannelCountMismatch(self.layout.channels, P::CHANNEL_COUNT), self, )); } if !self.fits(self.samples.deref().len()) { return Err((Error::TooLarge, self)); } Ok( ImageBuffer::from_raw(self.layout.width, self.layout.height, self.samples) .unwrap_or_else(|| { panic!("Preconditions should have been ensured before conversion") }), ) } /// Get the minimum length of a buffer such that all in-bounds samples have valid indices. /// /// This method will allow zero strides, allowing compact representations of monochrome images. /// To check that no aliasing occurs, try `check_alias_invariants`. For compact images (no /// aliasing and no unindexed samples) this is `width*height*channels`. But for both of the /// other cases, the reasoning is slightly more involved. /// /// # Explanation /// /// Note that there is a difference between `min_length` and the index of the sample /// 'one-past-the-end`. This is due to strides that may be larger than the dimension below. /// /// ## Example with holes /// /// Let's look at an example of a grayscale image with /// * `width_stride = 1` /// * `width = 2` /// * `height_stride = 3` /// * `height = 2` /// /// ```text /// | x x | x x m | $ /// min_length m ^ /// ^ one-past-the-end $ /// ``` /// /// The difference is also extreme for empty images with large strides. The one-past-the-end /// sample index is still as large as the largest of these strides while `min_length = 0`. /// /// ## Example with aliasing /// /// The concept gets even more important when you allow samples to alias each other. Here we /// have the buffer of a small grayscale image where this is the case, this time we will first /// show the buffer and then the individual rows below. /// /// * `width_stride = 1` /// * `width = 3` /// * `height_stride = 2` /// * `height = 2` /// /// ```text /// 1 2 3 4 5 m /// |1 2 3| row one /// |3 4 5| row two /// ^ m min_length /// ^ ??? one-past-the-end /// ``` /// /// This time 'one-past-the-end' is not even simply the largest stride times the extent of its /// dimension. That still points inside the image because `height*height_stride = 4` but also /// `index_of(1, 2) = 4`. pub fn min_length(&self) -> Option { self.layout.min_length() } /// Check if a buffer of length `len` is large enough. pub fn fits(&self, len: usize) -> bool { self.layout.fits(len) } /// If there are any samples aliasing each other. /// /// If this is not the case, it would always be safe to allow mutable access to two different /// samples at the same time. Otherwise, this operation would need additional checks. When one /// dimension overflows `usize` with its stride we also consider this aliasing. pub fn has_aliased_samples(&self) -> bool { self.layout.has_aliased_samples() } /// Check if a buffer fulfills the requirements of a normal form. /// /// Certain conversions have preconditions on the structure of the sample buffer that are not /// captured (by design) by the type system. These are then checked before the conversion. Such /// checks can all be done in constant time and will not inspect the buffer content. You can /// perform these checks yourself when the conversion is not required at this moment but maybe /// still performed later. 
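// Editor's sketch (illustrative only): the round trip hinted at above. A buffer
// produced by `ImageBuffer::into_flat_samples` is row-major packed, so
// `try_into_buffer` can reclaim it without copying; the pixel values are arbitrary.
//
// ```
// use image::flat::NormalForm;
// use image::{GrayImage, Luma};
//
// let mut img = GrayImage::new(4, 3);
// img.put_pixel(1, 2, Luma([200u8]));
//
// let flat = img.into_flat_samples();
// assert!(flat.is_normal(NormalForm::RowMajorPacked));
//
// // Reuses the same `Vec`; on failure the buffer is handed back instead of dropped.
// let img: GrayImage = flat.try_into_buffer().expect("row-major packed buffer");
// assert_eq!(img.get_pixel(1, 2), &Luma([200u8]));
// ```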
pub fn is_normal(&self, form: NormalForm) -> bool { self.layout.is_normal(form) } /// Check that the pixel and the channel index are in bounds. /// /// An in-bound coordinate does not yet guarantee that the corresponding calculation of a /// buffer index does not overflow. However, if such a buffer large enough to hold all samples /// actually exists in memory, this property of course follows. pub fn in_bounds(&self, channel: u8, x: u32, y: u32) -> bool { self.layout.in_bounds(channel, x, y) } /// Resolve the index of a particular sample. /// /// `None` if the index is outside the bounds or does not fit into a `usize`. pub fn index(&self, channel: u8, x: u32, y: u32) -> Option { self.layout.index(channel, x, y) } /// Get the theoretical position of sample (x, y, channel). /// /// The 'check' is for overflow during index calculation, not that it is contained in the /// image. Two samples may return the same index, even when one of them is out of bounds. This /// happens when all strides are `0`, i.e. the image is an arbitrarily large monochrome image. pub fn index_ignoring_bounds(&self, channel: usize, x: usize, y: usize) -> Option { self.layout.index_ignoring_bounds(channel, x, y) } /// Get an index provided it is inbouds. /// /// Assumes that the image is backed by some sufficiently large buffer. Then computation can /// not overflow as we could represent the maximum coordinate. Since overflow is defined either /// way, this method can not be unsafe. pub fn in_bounds_index(&self, channel: u8, x: u32, y: u32) -> usize { self.layout.in_bounds_index(channel, x, y) } /// Shrink the image to the minimum of current and given extents. /// /// This does not modify the strides, so that the resulting sample buffer may have holes /// created by the shrinking operation. Shrinking could also lead to an non-aliasing image when /// samples had aliased each other before. pub fn shrink_to(&mut self, channels: u8, width: u32, height: u32) { self.layout.shrink_to(channels, width, height); } } impl<'buf, Subpixel> FlatSamples<&'buf [Subpixel]> { /// Create a monocolor image from a single pixel. /// /// This can be used as a very cheap source of a `GenericImageView` with an arbitrary number of /// pixels of a single color, without any dynamic allocation. /// /// ## Examples /// /// ``` /// # fn paint_something(_: T) {} /// use image::{flat::FlatSamples, GenericImage, RgbImage, Rgb}; /// /// let background = Rgb([20, 20, 20]); /// let bg = FlatSamples::with_monocolor(&background, 200, 200);; /// /// let mut image = RgbImage::new(200, 200); /// paint_something(&mut image); /// /// // Reset the canvas /// image.copy_from(&bg.as_view().unwrap(), 0, 0); /// ``` pub fn with_monocolor
<P>
(pixel: &'buf P, width: u32, height: u32) -> Self where P: Pixel, Subpixel: crate::Primitive, { FlatSamples { samples: pixel.channels(), layout: SampleLayout { channels: P::CHANNEL_COUNT, channel_stride: 1, width, width_stride: 0, height, height_stride: 0, }, // TODO this value is never set. It should be set in all places where the Pixel type implements PixelWithColorType color_hint: None, } } } /// A flat buffer that can be used as an image view. /// /// This is a nearly trivial wrapper around a buffer but at least sanitizes by checking the buffer /// length first and constraining the pixel type. /// /// Note that this does not eliminate panics as the `AsRef<[T]>` implementation of `Buffer` may be /// unreliable, i.e. return different buffers at different times. This of course is a non-issue for /// all common collections where the bounds check once must be enough. /// /// # Inner invariants /// /// * For all indices inside bounds, the corresponding index is valid in the buffer /// * `P::channel_count()` agrees with `self.inner.layout.channels` /// #[derive(Clone, Debug)] pub struct View where Buffer: AsRef<[P::Subpixel]>, { inner: FlatSamples, phantom: PhantomData
<P>
, } /// A mutable owning version of a flat buffer. /// /// While this wraps a buffer similar to `ImageBuffer`, this is mostly intended as a utility. The /// library endorsed normalized representation is still `ImageBuffer`. Also, the implementation of /// `AsMut<[P::Subpixel]>` must always yield the same buffer. Therefore there is no public way to /// construct this with an owning buffer. /// /// # Inner invariants /// /// * For all indices inside bounds, the corresponding index is valid in the buffer /// * There is no aliasing of samples /// * The samples are packed, i.e. `self.inner.layout.sample_stride == 1` /// * `P::channel_count()` agrees with `self.inner.layout.channels` /// #[derive(Clone, Debug)] pub struct ViewMut where Buffer: AsMut<[P::Subpixel]>, { inner: FlatSamples, phantom: PhantomData
<P>
, } /// Denotes invalid flat sample buffers when trying to convert to stricter types. /// /// The biggest use case being `ImageBuffer` which expects closely packed /// samples in a row major matrix representation. But this error type may be /// resused for other import functions. A more versatile user may also try to /// correct the underlying representation depending on the error variant. #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] pub enum Error { /// The represented image was too large. /// /// The optional value denotes a possibly accepted maximal bound. TooLarge, /// The represented image can not use this representation. /// /// Has an additional value of the normalized form that would be accepted. NormalFormRequired(NormalForm), /// The color format did not match the channel count. /// /// In some cases you might be able to fix this by lowering the reported pixel count of the /// buffer without touching the strides. /// /// In very special circumstances you *may* do the opposite. This is **VERY** dangerous but not /// directly memory unsafe although that will likely alias pixels. One scenario is when you /// want to construct an `Rgba` image but have only 3 bytes per pixel and for some reason don't /// care about the value of the alpha channel even though you need `Rgba`. ChannelCountMismatch(u8, u8), /// Deprecated - `ChannelCountMismatch` is used instead WrongColor(ColorType), } /// Different normal forms of buffers. /// /// A normal form is an unaliased buffer with some additional constraints. The `ÌmageBuffer` uses /// row major form with packed samples. #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] pub enum NormalForm { /// No pixel aliases another. /// /// Unaliased also guarantees that all index calculations in the image bounds using /// `dim_index*dim_stride` (such as `x*width_stride + y*height_stride`) do not overflow. Unaliased, /// At least pixels are packed. /// /// Images of these types can wrap `[T]`-slices into the standard color types. This is a /// precondition for `GenericImage` which requires by-reference access to pixels. PixelPacked, /// All samples are packed. /// /// This is orthogonal to `PixelPacked`. It requires that there are no holes in the image but /// it is not necessary that the pixel samples themselves are adjacent. An example of this /// behaviour is a planar image layout. ImagePacked, /// The samples are in row-major form and all samples are packed. /// /// In addition to `PixelPacked` and `ImagePacked` this also asserts that the pixel matrix is /// in row-major form. RowMajorPacked, /// The samples are in column-major form and all samples are packed. /// /// In addition to `PixelPacked` and `ImagePacked` this also asserts that the pixel matrix is /// in column-major form. ColumnMajorPacked, } impl View where Buffer: AsRef<[P::Subpixel]>, { /// Take out the sample buffer. /// /// Gives up the normalization invariants on the buffer format. pub fn into_inner(self) -> FlatSamples { self.inner } /// Get a reference on the inner sample descriptor. /// /// There is no mutable counterpart as modifying the buffer format, including strides and /// lengths, could invalidate the accessibility invariants of the `View`. It is not specified /// if the inner buffer is the same as the buffer of the image from which this view was /// created. It might have been truncated as an optimization. pub fn flat(&self) -> &FlatSamples { &self.inner } /// Get a reference on the inner buffer. 
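// Editor's sketch (illustrative only): how the `NormalForm` hierarchy above plays
// out for the two packed constructors of `SampleLayout`; the results follow from
// the stride definitions.
//
// ```
// use image::flat::{NormalForm, SampleLayout};
//
// let row = SampleLayout::row_major_packed(3, 16, 8);
// assert!(row.is_normal(NormalForm::PixelPacked));
// assert!(row.is_normal(NormalForm::ImagePacked));
// assert!(row.is_normal(NormalForm::RowMajorPacked));
// assert!(!row.is_normal(NormalForm::ColumnMajorPacked));
//
// let col = SampleLayout::column_major_packed(3, 16, 8);
// assert!(col.is_normal(NormalForm::ColumnMajorPacked));
// assert!(!col.is_normal(NormalForm::RowMajorPacked));
// ```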
/// /// There is no mutable counter part since it is not intended to allow you to reassign the /// buffer or otherwise change its size or properties. pub fn samples(&self) -> &Buffer { &self.inner.samples } /// Get a reference to a selected subpixel if it is in-bounds. /// /// This method will return `None` when the sample is out-of-bounds. All errors that could /// occur due to overflow have been eliminated while construction the `View`. pub fn get_sample(&self, channel: u8, x: u32, y: u32) -> Option<&P::Subpixel> { if !self.inner.in_bounds(channel, x, y) { return None; } let index = self.inner.in_bounds_index(channel, x, y); // Should always be `Some(_)` but checking is more costly. self.samples().as_ref().get(index) } /// Get a mutable reference to a selected subpixel if it is in-bounds. /// /// This is relevant only when constructed with `FlatSamples::as_view_with_mut_samples`. This /// method will return `None` when the sample is out-of-bounds. All errors that could occur due /// to overflow have been eliminated while construction the `View`. /// /// **WARNING**: Note that of course samples may alias, so that the mutable reference returned /// here can in fact modify more than the coordinate in the argument. pub fn get_mut_sample(&mut self, channel: u8, x: u32, y: u32) -> Option<&mut P::Subpixel> where Buffer: AsMut<[P::Subpixel]>, { if !self.inner.in_bounds(channel, x, y) { return None; } let index = self.inner.in_bounds_index(channel, x, y); // Should always be `Some(_)` but checking is more costly. self.inner.samples.as_mut().get_mut(index) } /// Get the minimum length of a buffer such that all in-bounds samples have valid indices. /// /// See `FlatSamples::min_length`. This method will always succeed. pub fn min_length(&self) -> usize { self.inner.min_length().unwrap() } /// Return the portion of the buffer that holds sample values. /// /// While this can not fail–the validity of all coordinates has been validated during the /// conversion from `FlatSamples`–the resulting slice may still contain holes. pub fn image_slice(&self) -> &[P::Subpixel] { &self.samples().as_ref()[..self.min_length()] } /// Return the mutable portion of the buffer that holds sample values. /// /// This is relevant only when constructed with `FlatSamples::as_view_with_mut_samples`. While /// this can not fail–the validity of all coordinates has been validated during the conversion /// from `FlatSamples`–the resulting slice may still contain holes. pub fn image_mut_slice(&mut self) -> &mut [P::Subpixel] where Buffer: AsMut<[P::Subpixel]>, { let min_length = self.min_length(); &mut self.inner.samples.as_mut()[..min_length] } /// Shrink the inner image. /// /// The new dimensions will be the minimum of the previous dimensions. Since the set of /// in-bounds pixels afterwards is a subset of the current ones, this is allowed on a `View`. /// Note that you can not change the number of channels as an intrinsic property of `P`. pub fn shrink_to(&mut self, width: u32, height: u32) { let channels = self.inner.layout.channels; self.inner.shrink_to(channels, width, height); } /// Try to convert this into an image with mutable pixels. /// /// The resulting image implements `GenericImage` in addition to `GenericImageView`. While this /// has mutable samples, it does not enforce that pixel can not alias and that samples are /// packed enough for a mutable pixel reference. This is slightly cheaper than the chain /// `self.into_inner().as_view_mut()` and keeps the `View` alive on failure. 
/// /// ``` /// # use image::RgbImage; /// # use image::Rgb; /// let mut buffer = RgbImage::new(480, 640).into_flat_samples(); /// let view = buffer.as_view_with_mut_samples::>().unwrap(); /// /// // Inspect some pixels, … /// /// // Doesn't fail because it was originally an `RgbImage`. /// let view_mut = view.try_upgrade().unwrap(); /// ``` pub fn try_upgrade(self) -> Result, (Error, Self)> where Buffer: AsMut<[P::Subpixel]>, { if !self.inner.is_normal(NormalForm::PixelPacked) { return Err((Error::NormalFormRequired(NormalForm::PixelPacked), self)); } // No length check or channel count check required, all the same. Ok(ViewMut { inner: self.inner, phantom: PhantomData, }) } } impl ViewMut where Buffer: AsMut<[P::Subpixel]>, { /// Take out the sample buffer. /// /// Gives up the normalization invariants on the buffer format. pub fn into_inner(self) -> FlatSamples { self.inner } /// Get a reference on the sample buffer descriptor. /// /// There is no mutable counterpart as modifying the buffer format, including strides and /// lengths, could invalidate the accessibility invariants of the `View`. It is not specified /// if the inner buffer is the same as the buffer of the image from which this view was /// created. It might have been truncated as an optimization. pub fn flat(&self) -> &FlatSamples { &self.inner } /// Get a reference on the inner buffer. /// /// There is no mutable counter part since it is not intended to allow you to reassign the /// buffer or otherwise change its size or properties. However, its contents can be accessed /// mutable through a slice with `image_mut_slice`. pub fn samples(&self) -> &Buffer { &self.inner.samples } /// Get the minimum length of a buffer such that all in-bounds samples have valid indices. /// /// See `FlatSamples::min_length`. This method will always succeed. pub fn min_length(&self) -> usize { self.inner.min_length().unwrap() } /// Get a reference to a selected subpixel. /// /// This method will return `None` when the sample is out-of-bounds. All errors that could /// occur due to overflow have been eliminated while construction the `View`. pub fn get_sample(&self, channel: u8, x: u32, y: u32) -> Option<&P::Subpixel> where Buffer: AsRef<[P::Subpixel]>, { if !self.inner.in_bounds(channel, x, y) { return None; } let index = self.inner.in_bounds_index(channel, x, y); // Should always be `Some(_)` but checking is more costly. self.samples().as_ref().get(index) } /// Get a mutable reference to a selected sample. /// /// This method will return `None` when the sample is out-of-bounds. All errors that could /// occur due to overflow have been eliminated while construction the `View`. pub fn get_mut_sample(&mut self, channel: u8, x: u32, y: u32) -> Option<&mut P::Subpixel> { if !self.inner.in_bounds(channel, x, y) { return None; } let index = self.inner.in_bounds_index(channel, x, y); // Should always be `Some(_)` but checking is more costly. self.inner.samples.as_mut().get_mut(index) } /// Return the portion of the buffer that holds sample values. /// /// While this can not fail–the validity of all coordinates has been validated during the /// conversion from `FlatSamples`–the resulting slice may still contain holes. pub fn image_slice(&self) -> &[P::Subpixel] where Buffer: AsRef<[P::Subpixel]>, { &self.inner.samples.as_ref()[..self.min_length()] } /// Return the mutable buffer that holds sample values. 
pub fn image_mut_slice(&mut self) -> &mut [P::Subpixel] { let length = self.min_length(); &mut self.inner.samples.as_mut()[..length] } /// Shrink the inner image. /// /// The new dimensions will be the minimum of the previous dimensions. Since the set of /// in-bounds pixels afterwards is a subset of the current ones, this is allowed on a `View`. /// Note that you can not change the number of channels as an intrinsic property of `P`. pub fn shrink_to(&mut self, width: u32, height: u32) { let channels = self.inner.layout.channels; self.inner.shrink_to(channels, width, height); } } // The out-of-bounds panic for single sample access similar to `slice::index`. #[inline(never)] #[cold] fn panic_cwh_out_of_bounds( (c, x, y): (u8, u32, u32), bounds: (u8, u32, u32), strides: (usize, usize, usize), ) -> ! { panic!( "Sample coordinates {:?} out of sample matrix bounds {:?} with strides {:?}", (c, x, y), bounds, strides ) } // The out-of-bounds panic for pixel access similar to `slice::index`. #[inline(never)] #[cold] fn panic_pixel_out_of_bounds((x, y): (u32, u32), bounds: (u32, u32)) -> ! { panic!("Image index {:?} out of bounds {:?}", (x, y), bounds) } impl Index<(u8, u32, u32)> for FlatSamples where Buffer: Index, { type Output = Buffer::Output; /// Return a reference to a single sample at specified coordinates. /// /// # Panics /// /// When the coordinates are out of bounds or the index calculation fails. fn index(&self, (c, x, y): (u8, u32, u32)) -> &Self::Output { let bounds = self.bounds(); let strides = self.strides_cwh(); let index = self .index(c, x, y) .unwrap_or_else(|| panic_cwh_out_of_bounds((c, x, y), bounds, strides)); &self.samples[index] } } impl IndexMut<(u8, u32, u32)> for FlatSamples where Buffer: IndexMut, { /// Return a mutable reference to a single sample at specified coordinates. /// /// # Panics /// /// When the coordinates are out of bounds or the index calculation fails. 
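/// A brief sketch of tuple indexing, assuming the tightly packed layout
/// produced by `ImageBuffer::into_flat_samples`:
///
/// ```
/// # use image::RgbImage;
/// let mut buffer = RgbImage::new(2, 2).into_flat_samples();
/// // Write the green channel (channel 1) of the pixel at (x, y) = (1, 0).
/// buffer[(1u8, 1u32, 0u32)] = 255;
/// assert_eq!(buffer[(1u8, 1u32, 0u32)], 255);
/// ```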
fn index_mut(&mut self, (c, x, y): (u8, u32, u32)) -> &mut Self::Output { let bounds = self.bounds(); let strides = self.strides_cwh(); let index = self .index(c, x, y) .unwrap_or_else(|| panic_cwh_out_of_bounds((c, x, y), bounds, strides)); &mut self.samples[index] } } impl GenericImageView for View where Buffer: AsRef<[P::Subpixel]>, { type Pixel = P; fn dimensions(&self) -> (u32, u32) { (self.inner.layout.width, self.inner.layout.height) } fn get_pixel(&self, x: u32, y: u32) -> Self::Pixel { if !self.inner.in_bounds(0, x, y) { panic_pixel_out_of_bounds((x, y), self.dimensions()) } let image = self.inner.samples.as_ref(); let base_index = self.inner.in_bounds_index(0, x, y); let channels = P::CHANNEL_COUNT as usize; let mut buffer = [Zero::zero(); 256]; buffer .iter_mut() .enumerate() .take(channels) .for_each(|(c, to)| { let index = base_index + c * self.inner.layout.channel_stride; *to = image[index]; }); *P::from_slice(&buffer[..channels]) } } impl GenericImageView for ViewMut where Buffer: AsMut<[P::Subpixel]> + AsRef<[P::Subpixel]>, { type Pixel = P; fn dimensions(&self) -> (u32, u32) { (self.inner.layout.width, self.inner.layout.height) } fn get_pixel(&self, x: u32, y: u32) -> Self::Pixel { if !self.inner.in_bounds(0, x, y) { panic_pixel_out_of_bounds((x, y), self.dimensions()) } let image = self.inner.samples.as_ref(); let base_index = self.inner.in_bounds_index(0, x, y); let channels = P::CHANNEL_COUNT as usize; let mut buffer = [Zero::zero(); 256]; buffer .iter_mut() .enumerate() .take(channels) .for_each(|(c, to)| { let index = base_index + c * self.inner.layout.channel_stride; *to = image[index]; }); *P::from_slice(&buffer[..channels]) } } impl GenericImage for ViewMut where Buffer: AsMut<[P::Subpixel]> + AsRef<[P::Subpixel]>, { fn get_pixel_mut(&mut self, x: u32, y: u32) -> &mut Self::Pixel { if !self.inner.in_bounds(0, x, y) { panic_pixel_out_of_bounds((x, y), self.dimensions()) } let base_index = self.inner.in_bounds_index(0, x, y); let channel_count =
<P as Pixel>
::CHANNEL_COUNT as usize; let pixel_range = base_index..base_index + channel_count; P::from_slice_mut(&mut self.inner.samples.as_mut()[pixel_range]) } #[allow(deprecated)] fn put_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel) { *self.get_pixel_mut(x, y) = pixel; } #[allow(deprecated)] fn blend_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel) { self.get_pixel_mut(x, y).blend(&pixel); } } impl From for ImageError { fn from(error: Error) -> ImageError { #[derive(Debug)] struct NormalFormRequiredError(NormalForm); impl fmt::Display for NormalFormRequiredError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "Required sample buffer in normal form {:?}", self.0) } } impl error::Error for NormalFormRequiredError {} match error { Error::TooLarge => ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, )), Error::NormalFormRequired(form) => ImageError::Decoding(DecodingError::new( ImageFormatHint::Unknown, NormalFormRequiredError(form), )), Error::ChannelCountMismatch(_lc, _pc) => ImageError::Parameter( ParameterError::from_kind(ParameterErrorKind::DimensionMismatch), ), Error::WrongColor(color) => { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormatHint::Unknown, UnsupportedErrorKind::Color(color.into()), )) } } } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Error::TooLarge => write!(f, "The layout is too large"), Error::NormalFormRequired(form) => write!( f, "The layout needs to {}", match form { NormalForm::ColumnMajorPacked => "be packed and in column major form", NormalForm::ImagePacked => "be fully packed", NormalForm::PixelPacked => "have packed pixels", NormalForm::RowMajorPacked => "be packed and in row major form", NormalForm::Unaliased => "not have any aliasing channels", } ), Error::ChannelCountMismatch(layout_channels, pixel_channels) => { write!(f, "The channel count of the chosen pixel (={pixel_channels}) does agree with the layout (={layout_channels})") } Error::WrongColor(color) => { write!(f, "The chosen color type does not match the hint {color:?}") } } } } impl error::Error for Error {} impl PartialOrd for NormalForm { /// Compares the logical preconditions. /// /// `a < b` if the normal form `a` has less preconditions than `b`. 
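/// A small sketch of the resulting order:
///
/// ```
/// use std::cmp::Ordering;
/// use image::flat::NormalForm;
///
/// assert_eq!(
///     NormalForm::RowMajorPacked.partial_cmp(&NormalForm::PixelPacked),
///     Some(Ordering::Greater)
/// );
/// // Row-major and column-major packing do not imply each other.
/// assert_eq!(
///     NormalForm::RowMajorPacked.partial_cmp(&NormalForm::ColumnMajorPacked),
///     None
/// );
/// ```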
fn partial_cmp(&self, other: &Self) -> Option { match (*self, *other) { (NormalForm::Unaliased, NormalForm::Unaliased) => Some(cmp::Ordering::Equal), (NormalForm::PixelPacked, NormalForm::PixelPacked) => Some(cmp::Ordering::Equal), (NormalForm::ImagePacked, NormalForm::ImagePacked) => Some(cmp::Ordering::Equal), (NormalForm::RowMajorPacked, NormalForm::RowMajorPacked) => Some(cmp::Ordering::Equal), (NormalForm::ColumnMajorPacked, NormalForm::ColumnMajorPacked) => { Some(cmp::Ordering::Equal) } (NormalForm::Unaliased, _) => Some(cmp::Ordering::Less), (_, NormalForm::Unaliased) => Some(cmp::Ordering::Greater), (NormalForm::PixelPacked, NormalForm::ColumnMajorPacked) => Some(cmp::Ordering::Less), (NormalForm::PixelPacked, NormalForm::RowMajorPacked) => Some(cmp::Ordering::Less), (NormalForm::RowMajorPacked, NormalForm::PixelPacked) => Some(cmp::Ordering::Greater), (NormalForm::ColumnMajorPacked, NormalForm::PixelPacked) => { Some(cmp::Ordering::Greater) } (NormalForm::ImagePacked, NormalForm::ColumnMajorPacked) => Some(cmp::Ordering::Less), (NormalForm::ImagePacked, NormalForm::RowMajorPacked) => Some(cmp::Ordering::Less), (NormalForm::RowMajorPacked, NormalForm::ImagePacked) => Some(cmp::Ordering::Greater), (NormalForm::ColumnMajorPacked, NormalForm::ImagePacked) => { Some(cmp::Ordering::Greater) } (NormalForm::ImagePacked, NormalForm::PixelPacked) => None, (NormalForm::PixelPacked, NormalForm::ImagePacked) => None, (NormalForm::RowMajorPacked, NormalForm::ColumnMajorPacked) => None, (NormalForm::ColumnMajorPacked, NormalForm::RowMajorPacked) => None, } } } #[cfg(test)] mod tests { use super::*; use crate::buffer_::GrayAlphaImage; use crate::color::{LumaA, Rgb}; #[test] fn aliasing_view() { let buffer = FlatSamples { samples: &[42], layout: SampleLayout { channels: 3, channel_stride: 0, width: 100, width_stride: 0, height: 100, height_stride: 0, }, color_hint: None, }; let view = buffer.as_view::>().expect("This is a valid view"); let pixel_count = view .pixels() .inspect(|pixel| assert!(pixel.2 == Rgb([42, 42, 42]))) .count(); assert_eq!(pixel_count, 100 * 100); } #[test] fn mutable_view() { let mut buffer = FlatSamples { samples: [0; 18], layout: SampleLayout { channels: 2, channel_stride: 1, width: 3, width_stride: 2, height: 3, height_stride: 6, }, color_hint: None, }; { let mut view = buffer .as_view_mut::>() .expect("This should be a valid mutable buffer"); assert_eq!(view.dimensions(), (3, 3)); #[allow(deprecated)] for i in 0..9 { *view.get_pixel_mut(i % 3, i / 3) = LumaA([2 * i as u16, 2 * i as u16 + 1]); } } buffer .samples .iter() .enumerate() .for_each(|(idx, sample)| assert_eq!(idx, *sample as usize)); } #[test] fn normal_forms() { assert!(FlatSamples { samples: [0u8; 0], layout: SampleLayout { channels: 2, channel_stride: 1, width: 3, width_stride: 9, height: 3, height_stride: 28, }, color_hint: None, } .is_normal(NormalForm::PixelPacked)); assert!(FlatSamples { samples: [0u8; 0], layout: SampleLayout { channels: 2, channel_stride: 8, width: 4, width_stride: 1, height: 2, height_stride: 4, }, color_hint: None, } .is_normal(NormalForm::ImagePacked)); assert!(FlatSamples { samples: [0u8; 0], layout: SampleLayout { channels: 2, channel_stride: 1, width: 4, width_stride: 2, height: 2, height_stride: 8, }, color_hint: None, } .is_normal(NormalForm::RowMajorPacked)); assert!(FlatSamples { samples: [0u8; 0], layout: SampleLayout { channels: 2, channel_stride: 1, width: 4, width_stride: 4, height: 2, height_stride: 2, }, color_hint: None, } .is_normal(NormalForm::ColumnMajorPacked)); } 
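// Illustrative sketch: a packed, row-major layout maps (channel, x, y) to the
// flat index `c * channel_stride + x * width_stride + y * height_stride`.
#[test]
fn row_major_index_sketch() {
    let buffer = FlatSamples {
        samples: [0u8, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
        layout: SampleLayout {
            channels: 2,
            channel_stride: 1,
            width: 3,
            width_stride: 2,
            height: 2,
            height_stride: 6,
        },
        color_hint: None,
    };
    let view = buffer
        .as_view::<LumaA<u8>>()
        .expect("packed row-major layout is a valid view");
    // Sample (c, x, y) = (1, 2, 1) lives at 1 * 1 + 2 * 2 + 1 * 6 = 11.
    assert_eq!(view.get_sample(1, 2, 1), Some(&11u8));
}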
#[test] fn image_buffer_conversion() { let expected_layout = SampleLayout { channels: 2, channel_stride: 1, width: 4, width_stride: 2, height: 2, height_stride: 8, }; let initial = GrayAlphaImage::new(expected_layout.width, expected_layout.height); let buffer = initial.into_flat_samples(); assert_eq!(buffer.layout, expected_layout); let _: GrayAlphaImage = buffer.try_into_buffer().unwrap_or_else(|(error, _)| { panic!("Expected buffer to be convertible but {:?}", error) }); } } image-0.25.5/src/image.rs000064400000000000000000001757711046102023000132400ustar 00000000000000#![allow(clippy::too_many_arguments)] use std::ffi::OsStr; use std::io::{self, Write}; use std::mem::size_of; use std::ops::{Deref, DerefMut}; use std::path::Path; #[cfg(feature = "serde")] use serde::{Deserialize, Serialize}; use crate::color::{ColorType, ExtendedColorType}; use crate::error::{ ImageError, ImageFormatHint, ImageResult, LimitError, LimitErrorKind, ParameterError, ParameterErrorKind, UnsupportedError, UnsupportedErrorKind, }; use crate::math::Rect; use crate::metadata::Orientation; use crate::traits::Pixel; use crate::ImageBuffer; use crate::animation::Frames; /// An enumeration of supported image formats. /// Not all formats support both encoding and decoding. #[derive(Clone, Copy, PartialEq, Eq, Debug, Hash)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[non_exhaustive] pub enum ImageFormat { /// An Image in PNG Format Png, /// An Image in JPEG Format Jpeg, /// An Image in GIF Format Gif, /// An Image in WEBP Format WebP, /// An Image in general PNM Format Pnm, /// An Image in TIFF Format Tiff, /// An Image in TGA Format Tga, /// An Image in DDS Format Dds, /// An Image in BMP Format Bmp, /// An Image in ICO Format Ico, /// An Image in Radiance HDR Format Hdr, /// An Image in OpenEXR Format OpenExr, /// An Image in farbfeld Format Farbfeld, /// An Image in AVIF Format Avif, /// An Image in QOI Format Qoi, /// An Image in PCX Format Pcx, } impl ImageFormat { /// Return the image format specified by a path's file extension. /// /// # Example /// /// ``` /// use image::ImageFormat; /// /// let format = ImageFormat::from_extension("jpg"); /// assert_eq!(format, Some(ImageFormat::Jpeg)); /// ``` #[inline] pub fn from_extension(ext: S) -> Option where S: AsRef, { // thin wrapper function to strip generics fn inner(ext: &OsStr) -> Option { let ext = ext.to_str()?.to_ascii_lowercase(); Some(match ext.as_str() { "avif" => ImageFormat::Avif, "jpg" | "jpeg" | "jfif" => ImageFormat::Jpeg, "png" | "apng" => ImageFormat::Png, "gif" => ImageFormat::Gif, "webp" => ImageFormat::WebP, "tif" | "tiff" => ImageFormat::Tiff, "tga" => ImageFormat::Tga, "dds" => ImageFormat::Dds, "bmp" => ImageFormat::Bmp, "ico" => ImageFormat::Ico, "hdr" => ImageFormat::Hdr, "exr" => ImageFormat::OpenExr, "pbm" | "pam" | "ppm" | "pgm" => ImageFormat::Pnm, "ff" => ImageFormat::Farbfeld, "qoi" => ImageFormat::Qoi, "pcx" => ImageFormat::Pcx, _ => return None, }) } inner(ext.as_ref()) } /// Return the image format specified by the path's file extension. /// /// # Example /// /// ``` /// use image::ImageFormat; /// /// let format = ImageFormat::from_path("images/ferris.png")?; /// assert_eq!(format, ImageFormat::Png); /// /// # Ok::<(), image::error::ImageError>(()) /// ``` #[inline] pub fn from_path
<P>
(path: P) -> ImageResult where P: AsRef, { // thin wrapper function to strip generics fn inner(path: &Path) -> ImageResult { let exact_ext = path.extension(); exact_ext .and_then(ImageFormat::from_extension) .ok_or_else(|| { let format_hint = match exact_ext { None => ImageFormatHint::Unknown, Some(os) => ImageFormatHint::PathExtension(os.into()), }; ImageError::Unsupported(format_hint.into()) }) } inner(path.as_ref()) } /// Return the image format specified by a MIME type. /// /// # Example /// /// ``` /// use image::ImageFormat; /// /// let format = ImageFormat::from_mime_type("image/png").unwrap(); /// assert_eq!(format, ImageFormat::Png); /// ``` pub fn from_mime_type(mime_type: M) -> Option where M: AsRef, { match mime_type.as_ref() { "image/avif" => Some(ImageFormat::Avif), "image/jpeg" => Some(ImageFormat::Jpeg), "image/png" => Some(ImageFormat::Png), "image/gif" => Some(ImageFormat::Gif), "image/webp" => Some(ImageFormat::WebP), "image/tiff" => Some(ImageFormat::Tiff), "image/x-targa" | "image/x-tga" => Some(ImageFormat::Tga), "image/vnd-ms.dds" => Some(ImageFormat::Dds), "image/bmp" => Some(ImageFormat::Bmp), "image/x-icon" => Some(ImageFormat::Ico), "image/vnd.radiance" => Some(ImageFormat::Hdr), "image/x-exr" => Some(ImageFormat::OpenExr), "image/x-portable-bitmap" | "image/x-portable-graymap" | "image/x-portable-pixmap" | "image/x-portable-anymap" => Some(ImageFormat::Pnm), // Qoi's MIME type is being worked on. // See: https://github.com/phoboslab/qoi/issues/167 "image/x-qoi" => Some(ImageFormat::Qoi), "image/vnd.zbrush.pcx" | "image/x-pcx" => Some(ImageFormat::Pcx), _ => None, } } /// Return the MIME type for this image format or "application/octet-stream" if no MIME type /// exists for the format. /// /// Some notes on a few of the MIME types: /// /// - The portable anymap format has a separate MIME type for the pixmap, graymap and bitmap /// formats, but this method returns the general "image/x-portable-anymap" MIME type. /// - The Targa format has two common MIME types, "image/x-targa" and "image/x-tga"; this /// method returns "image/x-targa" for that format. /// - The QOI MIME type is still a work in progress. This method returns "image/x-qoi" for /// that format. /// /// # Example /// /// ``` /// use image::ImageFormat; /// /// let mime_type = ImageFormat::Png.to_mime_type(); /// assert_eq!(mime_type, "image/png"); /// ``` #[must_use] pub fn to_mime_type(&self) -> &'static str { match self { ImageFormat::Avif => "image/avif", ImageFormat::Jpeg => "image/jpeg", ImageFormat::Png => "image/png", ImageFormat::Gif => "image/gif", ImageFormat::WebP => "image/webp", ImageFormat::Tiff => "image/tiff", // the targa MIME type has two options, but this one seems to be used more ImageFormat::Tga => "image/x-targa", ImageFormat::Dds => "image/vnd-ms.dds", ImageFormat::Bmp => "image/bmp", ImageFormat::Ico => "image/x-icon", ImageFormat::Hdr => "image/vnd.radiance", ImageFormat::OpenExr => "image/x-exr", // return the most general MIME type ImageFormat::Pnm => "image/x-portable-anymap", // Qoi's MIME type is being worked on. // See: https://github.com/phoboslab/qoi/issues/167 ImageFormat::Qoi => "image/x-qoi", // farbfeld's MIME type taken from https://www.wikidata.org/wiki/Q28206109 ImageFormat::Farbfeld => "application/octet-stream", ImageFormat::Pcx => "image/vnd.zbrush.pcx", } } /// Return if the `ImageFormat` can be decoded by the lib. 
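/// For example:
///
/// ```
/// use image::ImageFormat;
///
/// assert!(ImageFormat::Png.can_read());
/// // No DDS decoder is currently wired up.
/// assert!(!ImageFormat::Dds.can_read());
/// ```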
#[inline] #[must_use] pub fn can_read(&self) -> bool { // Needs to be updated once a new variant's decoder is added to free_functions.rs::load match self { ImageFormat::Png => true, ImageFormat::Gif => true, ImageFormat::Jpeg => true, ImageFormat::WebP => true, ImageFormat::Tiff => true, ImageFormat::Tga => true, ImageFormat::Dds => false, ImageFormat::Bmp => true, ImageFormat::Ico => true, ImageFormat::Hdr => true, ImageFormat::OpenExr => true, ImageFormat::Pnm => true, ImageFormat::Farbfeld => true, ImageFormat::Avif => true, ImageFormat::Qoi => true, ImageFormat::Pcx => true, } } /// Return if the `ImageFormat` can be encoded by the lib. #[inline] #[must_use] pub fn can_write(&self) -> bool { // Needs to be updated once a new variant's encoder is added to free_functions.rs::save_buffer_with_format_impl match self { ImageFormat::Gif => true, ImageFormat::Ico => true, ImageFormat::Jpeg => true, ImageFormat::Png => true, ImageFormat::Bmp => true, ImageFormat::Tiff => true, ImageFormat::Tga => true, ImageFormat::Pnm => true, ImageFormat::Farbfeld => true, ImageFormat::Avif => true, ImageFormat::WebP => true, ImageFormat::Hdr => true, ImageFormat::OpenExr => true, ImageFormat::Dds => false, ImageFormat::Qoi => true, ImageFormat::Pcx => false, } } /// Return a list of applicable extensions for this format. /// /// All currently recognized image formats specify at least on extension but for future /// compatibility you should not rely on this fact. The list may be empty if the format has no /// recognized file representation, for example in case it is used as a purely transient memory /// format. /// /// The method name `extensions` remains reserved for introducing another method in the future /// that yields a slice of `OsStr` which is blocked by several features of const evaluation. #[must_use] pub fn extensions_str(self) -> &'static [&'static str] { match self { ImageFormat::Png => &["png"], ImageFormat::Jpeg => &["jpg", "jpeg"], ImageFormat::Gif => &["gif"], ImageFormat::WebP => &["webp"], ImageFormat::Pnm => &["pbm", "pam", "ppm", "pgm"], ImageFormat::Tiff => &["tiff", "tif"], ImageFormat::Tga => &["tga"], ImageFormat::Dds => &["dds"], ImageFormat::Bmp => &["bmp"], ImageFormat::Ico => &["ico"], ImageFormat::Hdr => &["hdr"], ImageFormat::OpenExr => &["exr"], ImageFormat::Farbfeld => &["ff"], // According to: https://aomediacodec.github.io/av1-avif/#mime-registration ImageFormat::Avif => &["avif"], ImageFormat::Qoi => &["qoi"], ImageFormat::Pcx => &["pcx"], } } /// Return the `ImageFormat`s which are enabled for reading. #[inline] #[must_use] pub fn reading_enabled(&self) -> bool { match self { ImageFormat::Png => cfg!(feature = "png"), ImageFormat::Gif => cfg!(feature = "gif"), ImageFormat::Jpeg => cfg!(feature = "jpeg"), ImageFormat::WebP => cfg!(feature = "webp"), ImageFormat::Tiff => cfg!(feature = "tiff"), ImageFormat::Tga => cfg!(feature = "tga"), ImageFormat::Bmp => cfg!(feature = "bmp"), ImageFormat::Ico => cfg!(feature = "ico"), ImageFormat::Hdr => cfg!(feature = "hdr"), ImageFormat::OpenExr => cfg!(feature = "exr"), ImageFormat::Pnm => cfg!(feature = "pnm"), ImageFormat::Farbfeld => cfg!(feature = "ff"), ImageFormat::Avif => cfg!(feature = "avif"), ImageFormat::Qoi => cfg!(feature = "qoi"), ImageFormat::Pcx => cfg!(feature = "pcx"), ImageFormat::Dds => false, } } /// Return the `ImageFormat`s which are enabled for writing. 
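/// For example, formats without an encoder always report `false`:
///
/// ```
/// use image::ImageFormat;
///
/// assert!(!ImageFormat::Dds.writing_enabled());
/// assert!(!ImageFormat::Pcx.writing_enabled());
/// ```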
#[inline] #[must_use] pub fn writing_enabled(&self) -> bool { match self { ImageFormat::Gif => cfg!(feature = "gif"), ImageFormat::Ico => cfg!(feature = "ico"), ImageFormat::Jpeg => cfg!(feature = "jpeg"), ImageFormat::Png => cfg!(feature = "png"), ImageFormat::Bmp => cfg!(feature = "bmp"), ImageFormat::Tiff => cfg!(feature = "tiff"), ImageFormat::Tga => cfg!(feature = "tga"), ImageFormat::Pnm => cfg!(feature = "pnm"), ImageFormat::Farbfeld => cfg!(feature = "ff"), ImageFormat::Avif => cfg!(feature = "avif"), ImageFormat::WebP => cfg!(feature = "webp"), ImageFormat::OpenExr => cfg!(feature = "exr"), ImageFormat::Qoi => cfg!(feature = "qoi"), ImageFormat::Hdr => cfg!(feature = "hdr"), ImageFormat::Pcx => false, ImageFormat::Dds => false, } } /// Return all `ImageFormat`s pub fn all() -> impl Iterator { [ ImageFormat::Gif, ImageFormat::Ico, ImageFormat::Jpeg, ImageFormat::Png, ImageFormat::Bmp, ImageFormat::Tiff, ImageFormat::Tga, ImageFormat::Pnm, ImageFormat::Farbfeld, ImageFormat::Avif, ImageFormat::WebP, ImageFormat::OpenExr, ImageFormat::Qoi, ImageFormat::Dds, ImageFormat::Hdr, ImageFormat::Pcx, ] .iter() .copied() } } // This struct manages buffering associated with implementing `Read` and `Seek` on decoders that can // must decode ranges of bytes at a time. #[allow(dead_code)] // When no image formats that use it are enabled pub(crate) struct ImageReadBuffer { scanline_bytes: usize, buffer: Vec, consumed: usize, total_bytes: u64, offset: u64, } impl ImageReadBuffer { /// Create a new `ImageReadBuffer`. /// /// Panics if `scanline_bytes` doesn't fit into a usize, because that would mean reading anything /// from the image would take more RAM than the entire virtual address space. In other words, /// actually using this struct would instantly OOM so just get it out of the way now. #[allow(dead_code)] // When no image formats that use it are enabled pub(crate) fn new(scanline_bytes: u64, total_bytes: u64) -> Self { Self { scanline_bytes: usize::try_from(scanline_bytes).unwrap(), buffer: Vec::new(), consumed: 0, total_bytes, offset: 0, } } #[allow(dead_code)] // When no image formats that use it are enabled pub(crate) fn read(&mut self, buf: &mut [u8], mut read_scanline: F) -> io::Result where F: FnMut(&mut [u8]) -> io::Result, { if self.buffer.len() == self.consumed { if self.offset == self.total_bytes { return Ok(0); } else if buf.len() >= self.scanline_bytes { // If there is nothing buffered and the user requested a full scanline worth of // data, skip buffering. let bytes_read = read_scanline(&mut buf[..self.scanline_bytes])?; self.offset += u64::try_from(bytes_read).unwrap(); return Ok(bytes_read); } else { // Lazily allocate buffer the first time that read is called with a buffer smaller // than the scanline size. if self.buffer.is_empty() { self.buffer.resize(self.scanline_bytes, 0); } self.consumed = 0; let bytes_read = read_scanline(&mut self.buffer[..])?; self.buffer.resize(bytes_read, 0); self.offset += u64::try_from(bytes_read).unwrap(); assert!(bytes_read == self.scanline_bytes || self.offset == self.total_bytes); } } // Finally, copy bytes into output buffer. 
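// Two cases: either more bytes are buffered than the caller asked for (hand out
// a prefix and only advance `consumed`), or the buffered remainder fits into
// `buf` and is drained completely.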
let bytes_buffered = self.buffer.len() - self.consumed; if bytes_buffered > buf.len() { buf.copy_from_slice(&self.buffer[self.consumed..][..buf.len()]); self.consumed += buf.len(); Ok(buf.len()) } else { buf[..bytes_buffered].copy_from_slice(&self.buffer[self.consumed..][..bytes_buffered]); self.consumed = self.buffer.len(); Ok(bytes_buffered) } } } /// Decodes a specific region of the image, represented by the rectangle /// starting from ```x``` and ```y``` and having ```length``` and ```width``` #[allow(dead_code)] // When no image formats that use it are enabled pub(crate) fn load_rect( x: u32, y: u32, width: u32, height: u32, buf: &mut [u8], row_pitch: usize, decoder: &mut D, scanline_bytes: usize, mut seek_scanline: F1, mut read_scanline: F2, ) -> ImageResult<()> where D: ImageDecoder, F1: FnMut(&mut D, u64) -> io::Result<()>, F2: FnMut(&mut D, &mut [u8]) -> Result<(), E>, ImageError: From, { let scanline_bytes = u64::try_from(scanline_bytes).unwrap(); let row_pitch = u64::try_from(row_pitch).unwrap(); let (x, y, width, height) = ( u64::from(x), u64::from(y), u64::from(width), u64::from(height), ); let dimensions = decoder.dimensions(); let bytes_per_pixel = u64::from(decoder.color_type().bytes_per_pixel()); let row_bytes = bytes_per_pixel * u64::from(dimensions.0); let total_bytes = width * height * bytes_per_pixel; assert!( buf.len() >= usize::try_from(total_bytes).unwrap_or(usize::MAX), "output buffer too short\n expected `{}`, provided `{}`", total_bytes, buf.len() ); let mut current_scanline = 0; let mut tmp = Vec::new(); let mut tmp_scanline = None; { // Read a range of the image starting from byte number `start` and continuing until byte // number `end`. Updates `current_scanline` and `bytes_read` appropriately. let mut read_image_range = |mut start: u64, end: u64, mut output: &mut [u8]| -> ImageResult<()> { // If the first scanline we need is already stored in the temporary buffer, then handle // it first. 
let target_scanline = start / scanline_bytes; if tmp_scanline == Some(target_scanline) { let position = target_scanline * scanline_bytes; let offset = start.saturating_sub(position); let len = (end - start) .min(scanline_bytes - offset) .min(end - position); output .write_all(&tmp[offset as usize..][..len as usize]) .unwrap(); start += len; if start == end { return Ok(()); } } let target_scanline = start / scanline_bytes; if target_scanline != current_scanline { seek_scanline(decoder, target_scanline)?; current_scanline = target_scanline; } let mut position = current_scanline * scanline_bytes; while position < end { if position >= start && end - position >= scanline_bytes { read_scanline(decoder, &mut output[..(scanline_bytes as usize)])?; output = &mut output[scanline_bytes as usize..]; } else { tmp.resize(scanline_bytes as usize, 0u8); read_scanline(decoder, &mut tmp)?; tmp_scanline = Some(current_scanline); let offset = start.saturating_sub(position); let len = (end - start) .min(scanline_bytes - offset) .min(end - position); output .write_all(&tmp[offset as usize..][..len as usize]) .unwrap(); } current_scanline += 1; position += scanline_bytes; } Ok(()) }; if x + width > u64::from(dimensions.0) || y + height > u64::from(dimensions.1) || width == 0 || height == 0 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } if scanline_bytes > usize::MAX as u64 { return Err(ImageError::Limits(LimitError::from_kind( LimitErrorKind::InsufficientMemory, ))); } if x == 0 && width == u64::from(dimensions.0) && row_pitch == row_bytes { let start = x * bytes_per_pixel + y * row_bytes; let end = (x + width) * bytes_per_pixel + (y + height - 1) * row_bytes; read_image_range(start, end, buf)?; } else { for (output_slice, row) in buf.chunks_mut(row_pitch as usize).zip(y..(y + height)) { let start = x * bytes_per_pixel + row * row_bytes; let end = (x + width) * bytes_per_pixel + row * row_bytes; read_image_range(start, end, output_slice)?; } } } // Seek back to the start Ok(seek_scanline(decoder, 0)?) } /// Reads all of the bytes of a decoder into a Vec. No particular alignment /// of the output buffer is guaranteed. /// /// Panics if there isn't enough memory to decode the image. pub(crate) fn decoder_to_vec(decoder: impl ImageDecoder) -> ImageResult> where T: crate::traits::Primitive + bytemuck::Pod, { let total_bytes = usize::try_from(decoder.total_bytes()); if total_bytes.is_err() || total_bytes.unwrap() > isize::MAX as usize { return Err(ImageError::Limits(LimitError::from_kind( LimitErrorKind::InsufficientMemory, ))); } let mut buf = vec![num_traits::Zero::zero(); total_bytes.unwrap() / size_of::()]; decoder.read_image(bytemuck::cast_slice_mut(buf.as_mut_slice()))?; Ok(buf) } /// The trait that all decoders implement pub trait ImageDecoder { /// Returns a tuple containing the width and height of the image fn dimensions(&self) -> (u32, u32); /// Returns the color type of the image data produced by this decoder fn color_type(&self) -> ColorType; /// Returns the color type of the image file before decoding fn original_color_type(&self) -> ExtendedColorType { self.color_type().into() } /// Returns the ICC color profile embedded in the image, or `Ok(None)` if the image does not have one. /// /// For formats that don't support embedded profiles this function should always return `Ok(None)`. fn icc_profile(&mut self) -> ImageResult>> { Ok(None) } /// Returns the raw [Exif](https://en.wikipedia.org/wiki/Exif) chunk, if it is present. 
/// A third-party crate such as [`kamadak-exif`](https://docs.rs/kamadak-exif/) is required to actually parse it. /// /// For formats that don't support embedded profiles this function should always return `Ok(None)`. fn exif_metadata(&mut self) -> ImageResult>> { Ok(None) } /// Returns the orientation of the image. /// /// This is usually obtained from the Exif metadata, if present. Formats that don't support /// indicating orientation in their image metadata will return `Ok(Orientation::NoTransforms)`. fn orientation(&mut self) -> ImageResult { Ok(self .exif_metadata()? .and_then(|chunk| Orientation::from_exif_chunk(&chunk)) .unwrap_or(Orientation::NoTransforms)) } /// Returns the total number of bytes in the decoded image. /// /// This is the size of the buffer that must be passed to `read_image` or /// `read_image_with_progress`. The returned value may exceed `usize::MAX`, in /// which case it isn't actually possible to construct a buffer to decode all the image data /// into. If, however, the size does not fit in a u64 then `u64::MAX` is returned. fn total_bytes(&self) -> u64 { let dimensions = self.dimensions(); let total_pixels = u64::from(dimensions.0) * u64::from(dimensions.1); let bytes_per_pixel = u64::from(self.color_type().bytes_per_pixel()); total_pixels.saturating_mul(bytes_per_pixel) } /// Returns all the bytes in the image. /// /// This function takes a slice of bytes and writes the pixel data of the image into it. /// Although not required, for certain color types callers may want to pass buffers which are /// aligned to 2 or 4 byte boundaries to the slice can be cast to a [u16] or [u32]. To accommodate /// such casts, the returned contents will always be in native endian. /// /// # Panics /// /// This function panics if `buf.len() != self.total_bytes()`. /// /// # Examples /// /// ```no_build /// use zerocopy::{AsBytes, FromBytes}; /// fn read_16bit_image(decoder: impl ImageDecoder) -> Vec<16> { /// let mut buf: Vec = vec![0; decoder.total_bytes()/2]; /// decoder.read_image(buf.as_bytes()); /// buf /// } /// ``` fn read_image(self, buf: &mut [u8]) -> ImageResult<()> where Self: Sized; /// Set the decoder to have the specified limits. See [`Limits`] for the different kinds of /// limits that is possible to set. /// /// Note to implementors: make sure you call [`Limits::check_support`] so that /// decoding fails if any unsupported strict limits are set. Also make sure /// you call [`Limits::check_dimensions`] to check the `max_image_width` and /// `max_image_height` limits. /// /// [`Limits`]: ./io/struct.Limits.html /// [`Limits::check_support`]: ./io/struct.Limits.html#method.check_support /// [`Limits::check_dimensions`]: ./io/struct.Limits.html#method.check_dimensions fn set_limits(&mut self, limits: crate::Limits) -> ImageResult<()> { limits.check_support(&crate::LimitSupport::default())?; let (width, height) = self.dimensions(); limits.check_dimensions(width, height)?; Ok(()) } /// Use `read_image` instead; this method is an implementation detail needed so the trait can /// be object safe. /// /// Note to implementors: This method should be implemented by calling `read_image` on /// the boxed decoder... 
/// ```no_build /// fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { /// (*self).read_image(buf) /// } /// ``` fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()>; } impl ImageDecoder for Box { fn dimensions(&self) -> (u32, u32) { (**self).dimensions() } fn color_type(&self) -> ColorType { (**self).color_type() } fn original_color_type(&self) -> ExtendedColorType { (**self).original_color_type() } fn icc_profile(&mut self) -> ImageResult>> { (**self).icc_profile() } fn exif_metadata(&mut self) -> ImageResult>> { (**self).exif_metadata() } fn total_bytes(&self) -> u64 { (**self).total_bytes() } fn read_image(self, buf: &mut [u8]) -> ImageResult<()> where Self: Sized, { T::read_image_boxed(self, buf) } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { T::read_image_boxed(*self, buf) } fn set_limits(&mut self, limits: crate::Limits) -> ImageResult<()> { (**self).set_limits(limits) } } /// Specialized image decoding not be supported by all formats pub trait ImageDecoderRect: ImageDecoder { /// Decode a rectangular section of the image. /// /// This function takes a slice of bytes and writes the pixel data of the image into it. /// The rectangle is specified by the x and y coordinates of the top left corner, the width /// and height of the rectangle, and the row pitch of the buffer. The row pitch is the number /// of bytes between the start of one row and the start of the next row. The row pitch must be /// at least as large as the width of the rectangle in bytes. fn read_rect( &mut self, x: u32, y: u32, width: u32, height: u32, buf: &mut [u8], row_pitch: usize, ) -> ImageResult<()>; } /// `AnimationDecoder` trait pub trait AnimationDecoder<'a> { /// Consume the decoder producing a series of frames. fn into_frames(self) -> Frames<'a>; } /// The trait all encoders implement pub trait ImageEncoder { /// Writes all the bytes in an image to the encoder. /// /// This function takes a slice of bytes of the pixel data of the image /// and encodes them. Unlike particular format encoders inherent impl encode /// methods where endianness is not specified, here image data bytes should /// always be in native endian. The implementor will reorder the endianness /// as necessary for the target encoding format. /// /// See also `ImageDecoder::read_image` which reads byte buffers into /// native endian. /// /// # Panics /// /// Panics if `width * height * color_type.bytes_per_pixel() != buf.len()`. fn write_image( self, buf: &[u8], width: u32, height: u32, color_type: ExtendedColorType, ) -> ImageResult<()>; /// Set the ICC profile to use for the image. /// /// This function is a no-op for formats that don't support ICC profiles. /// For formats that do support ICC profiles, the profile will be embedded /// in the image when it is saved. /// /// # Errors /// /// This function returns an error if the format does not support ICC profiles. 
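/// A possible usage sketch (WebP is one format with support; the empty vector
/// here only stands in for real ICC profile data):
///
/// ```no_build
/// use image::{codecs::webp::WebPEncoder, ImageEncoder};
///
/// let profile: Vec<u8> = vec![/* raw ICC bytes */];
/// let mut output = Vec::new();
/// let mut encoder = WebPEncoder::new_lossless(&mut output);
/// encoder.set_icc_profile(profile)?;
/// ```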
fn set_icc_profile(&mut self, icc_profile: Vec) -> Result<(), UnsupportedError> { let _ = icc_profile; Err(UnsupportedError::from_format_and_kind( ImageFormatHint::Unknown, UnsupportedErrorKind::GenericFeature( "ICC profiles are not supported for this format".into(), ), )) } } /// Immutable pixel iterator #[derive(Debug)] pub struct Pixels<'a, I: ?Sized + 'a> { image: &'a I, x: u32, y: u32, width: u32, height: u32, } impl Iterator for Pixels<'_, I> { type Item = (u32, u32, I::Pixel); fn next(&mut self) -> Option<(u32, u32, I::Pixel)> { if self.x >= self.width { self.x = 0; self.y += 1; } if self.y >= self.height { None } else { let pixel = self.image.get_pixel(self.x, self.y); let p = (self.x, self.y, pixel); self.x += 1; Some(p) } } } impl Clone for Pixels<'_, I> { fn clone(&self) -> Self { Pixels { ..*self } } } /// Trait to inspect an image. /// /// ``` /// use image::{GenericImageView, Rgb, RgbImage}; /// /// let buffer = RgbImage::new(10, 10); /// let image: &dyn GenericImageView> = &buffer; /// ``` pub trait GenericImageView { /// The type of pixel. type Pixel: Pixel; /// The width and height of this image. fn dimensions(&self) -> (u32, u32); /// The width of this image. fn width(&self) -> u32 { let (w, _) = self.dimensions(); w } /// The height of this image. fn height(&self) -> u32 { let (_, h) = self.dimensions(); h } /// Returns true if this x, y coordinate is contained inside the image. fn in_bounds(&self, x: u32, y: u32) -> bool { let (width, height) = self.dimensions(); x < width && y < height } /// Returns the pixel located at (x, y). Indexed from top left. /// /// # Panics /// /// Panics if `(x, y)` is out of bounds. fn get_pixel(&self, x: u32, y: u32) -> Self::Pixel; /// Returns the pixel located at (x, y). Indexed from top left. /// /// This function can be implemented in a way that ignores bounds checking. /// # Safety /// /// The coordinates must be [`in_bounds`] of the image. /// /// [`in_bounds`]: #method.in_bounds unsafe fn unsafe_get_pixel(&self, x: u32, y: u32) -> Self::Pixel { self.get_pixel(x, y) } /// Returns an Iterator over the pixels of this image. /// The iterator yields the coordinates of each pixel /// along with their value fn pixels(&self) -> Pixels where Self: Sized, { let (width, height) = self.dimensions(); Pixels { image: self, x: 0, y: 0, width, height, } } /// Returns a subimage that is an immutable view into this image. /// You can use [`GenericImage::sub_image`] if you need a mutable view instead. /// The coordinates set the position of the top left corner of the view. fn view(&self, x: u32, y: u32, width: u32, height: u32) -> SubImage<&Self> where Self: Sized, { assert!(u64::from(x) + u64::from(width) <= u64::from(self.width())); assert!(u64::from(y) + u64::from(height) <= u64::from(self.height())); SubImage::new(self, x, y, width, height) } } /// A trait for manipulating images. pub trait GenericImage: GenericImageView { /// Gets a reference to the mutable pixel at location `(x, y)`. Indexed from top left. /// /// # Panics /// /// Panics if `(x, y)` is out of bounds. /// /// Panics for dynamic images (this method is deprecated and will be removed). /// /// ## Known issues /// /// This requires the buffer to contain a unique set of continuous channels in the exact order /// and byte representation that the pixel type requires. This is somewhat restrictive. /// /// TODO: Maybe use some kind of entry API? 
this would allow pixel type conversion on the fly /// while still doing only one array lookup: /// /// ```ignore /// let px = image.pixel_entry_at(x,y); /// px.set_from_rgba(rgba) /// ``` #[deprecated(since = "0.24.0", note = "Use `get_pixel` and `put_pixel` instead.")] fn get_pixel_mut(&mut self, x: u32, y: u32) -> &mut Self::Pixel; /// Put a pixel at location (x, y). Indexed from top left. /// /// # Panics /// /// Panics if `(x, y)` is out of bounds. fn put_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel); /// Puts a pixel at location (x, y). Indexed from top left. /// /// This function can be implemented in a way that ignores bounds checking. /// # Safety /// /// The coordinates must be [`in_bounds`] of the image. /// /// [`in_bounds`]: traits.GenericImageView.html#method.in_bounds unsafe fn unsafe_put_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel) { self.put_pixel(x, y, pixel); } /// Put a pixel at location (x, y), taking into account alpha channels #[deprecated( since = "0.24.0", note = "Use iterator `pixels_mut` to blend the pixels directly" )] fn blend_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel); /// Copies all of the pixels from another image into this image. /// /// The other image is copied with the top-left corner of the /// other image placed at (x, y). /// /// In order to copy only a piece of the other image, use [`GenericImageView::view`]. /// /// You can use [`FlatSamples`] to source pixels from an arbitrary regular raster of channel /// values, for example from a foreign interface or a fixed image. /// /// # Returns /// Returns an error if the image is too large to be copied at the given position /// /// [`GenericImageView::view`]: trait.GenericImageView.html#method.view /// [`FlatSamples`]: flat/struct.FlatSamples.html fn copy_from(&mut self, other: &O, x: u32, y: u32) -> ImageResult<()> where O: GenericImageView, { // Do bounds checking here so we can use the non-bounds-checking // functions to copy pixels. if self.width() < other.width() + x || self.height() < other.height() + y { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for k in 0..other.height() { for i in 0..other.width() { let p = other.get_pixel(i, k); self.put_pixel(i + x, k + y, p); } } Ok(()) } /// Copies all of the pixels from one part of this image to another part of this image. /// /// The destination rectangle of the copy is specified with the top-left corner placed at (x, y). /// /// # Returns /// `true` if the copy was successful, `false` if the image could not /// be copied due to size constraints. fn copy_within(&mut self, source: Rect, x: u32, y: u32) -> bool { let Rect { x: sx, y: sy, width, height, } = source; let dx = x; let dy = y; assert!(sx < self.width() && dx < self.width()); assert!(sy < self.height() && dy < self.height()); if self.width() - dx.max(sx) < width || self.height() - dy.max(sy) < height { return false; } // since `.rev()` creates a new dype we would either have to go with dynamic dispatch for the ranges // or have quite a lot of code bloat. A macro gives us static dispatch with less visible bloat. macro_rules! copy_within_impl_ { ($xiter:expr, $yiter:expr) => { for y in $yiter { let sy = sy + y; let dy = dy + y; for x in $xiter { let sx = sx + x; let dx = dx + x; let pixel = self.get_pixel(sx, sy); self.put_pixel(dx, dy, pixel); } } }; } // check how target and source rectangles relate to each other so we dont overwrite data before we copied it. 
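// E.g. if the destination overlaps the source to the right (sx < dx), x must be
// walked in reverse: in forward order a pixel that is still needed as a source
// for a later offset would already have been overwritten.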
match (sx < dx, sy < dy) { (true, true) => copy_within_impl_!((0..width).rev(), (0..height).rev()), (true, false) => copy_within_impl_!((0..width).rev(), 0..height), (false, true) => copy_within_impl_!(0..width, (0..height).rev()), (false, false) => copy_within_impl_!(0..width, 0..height), } true } /// Returns a mutable subimage that is a view into this image. /// If you want an immutable subimage instead, use [`GenericImageView::view`] /// The coordinates set the position of the top left corner of the `SubImage`. fn sub_image(&mut self, x: u32, y: u32, width: u32, height: u32) -> SubImage<&mut Self> where Self: Sized, { assert!(u64::from(x) + u64::from(width) <= u64::from(self.width())); assert!(u64::from(y) + u64::from(height) <= u64::from(self.height())); SubImage::new(self, x, y, width, height) } } /// A View into another image /// /// Instances of this struct can be created using: /// - [`GenericImage::sub_image`] to create a mutable view, /// - [`GenericImageView::view`] to create an immutable view, /// - [`SubImage::new`] to instantiate the struct directly. /// /// Note that this does _not_ implement `GenericImage`, but it dereferences to one which allows you /// to use it as if it did. See [Design Considerations](#Design-Considerations) below for details. /// /// # Design Considerations /// /// For reasons relating to coherence, this is not itself a `GenericImage` or a `GenericImageView`. /// In short, we want to reserve the ability of adding traits implemented for _all_ generic images /// but in a different manner for `SubImage`. This may be required to ensure that stacking /// sub-images comes at no double indirect cost. /// /// If, ultimately, this is not needed then a directly implementation of `GenericImage` can and /// will get added. This inconvenience may alternatively get resolved if Rust allows some forms of /// specialization, which might make this trick unnecessary and thus also allows for a direct /// implementation. #[derive(Copy, Clone)] pub struct SubImage { inner: SubImageInner, } /// The inner type of `SubImage` that implements `GenericImage{,View}`. /// /// This type is _nominally_ `pub` but it is not exported from the crate. It should be regarded as /// an existential type in any case. #[derive(Copy, Clone)] pub struct SubImageInner { image: I, xoffset: u32, yoffset: u32, xstride: u32, ystride: u32, } /// Alias to access Pixel behind a reference type DerefPixel = <::Target as GenericImageView>::Pixel; /// Alias to access Subpixel behind a reference type DerefSubpixel = as Pixel>::Subpixel; impl SubImage { /// Construct a new subimage /// The coordinates set the position of the top left corner of the `SubImage`. pub fn new(image: I, x: u32, y: u32, width: u32, height: u32) -> SubImage { SubImage { inner: SubImageInner { image, xoffset: x, yoffset: y, xstride: width, ystride: height, }, } } /// Change the coordinates of this subimage. pub fn change_bounds(&mut self, x: u32, y: u32, width: u32, height: u32) { self.inner.xoffset = x; self.inner.yoffset = y; self.inner.xstride = width; self.inner.ystride = height; } /// The offsets of this subimage relative to the underlying image. 
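/// For example, after taking a view at `(2, 3)`:
///
/// ```
/// use image::{GenericImageView, RgbImage};
///
/// let buffer = RgbImage::new(10, 10);
/// let view = buffer.view(2, 3, 4, 4);
/// assert_eq!(view.offsets(), (2, 3));
/// ```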
pub fn offsets(&self) -> (u32, u32) { (self.inner.xoffset, self.inner.yoffset) } /// Convert this subimage to an `ImageBuffer` pub fn to_image(&self) -> ImageBuffer, Vec>> where I: Deref, I::Target: GenericImageView + 'static, { let mut out = ImageBuffer::new(self.inner.xstride, self.inner.ystride); let borrowed = &*self.inner.image; for y in 0..self.inner.ystride { for x in 0..self.inner.xstride { let p = borrowed.get_pixel(x + self.inner.xoffset, y + self.inner.yoffset); out.put_pixel(x, y, p); } } out } } /// Methods for readable images. impl SubImage where I: Deref, I::Target: GenericImageView, { /// Create a sub-view of the image. /// /// The coordinates given are relative to the current view on the underlying image. /// /// Note that this method is preferred to the one from `GenericImageView`. This is accessible /// with the explicit method call syntax but it should rarely be needed due to causing an /// extra level of indirection. /// /// ``` /// use image::{GenericImageView, RgbImage, SubImage}; /// let buffer = RgbImage::new(10, 10); /// /// let subimage: SubImage<&RgbImage> = buffer.view(0, 0, 10, 10); /// let subview: SubImage<&RgbImage> = subimage.view(0, 0, 10, 10); /// /// // Less efficient and NOT &RgbImage /// let _: SubImage<&_> = GenericImageView::view(&*subimage, 0, 0, 10, 10); /// ``` pub fn view(&self, x: u32, y: u32, width: u32, height: u32) -> SubImage<&I::Target> { use crate::GenericImageView as _; assert!(u64::from(x) + u64::from(width) <= u64::from(self.inner.width())); assert!(u64::from(y) + u64::from(height) <= u64::from(self.inner.height())); let x = self.inner.xoffset.saturating_add(x); let y = self.inner.yoffset.saturating_add(y); SubImage::new(&*self.inner.image, x, y, width, height) } /// Get a reference to the underlying image. pub fn inner(&self) -> &I::Target { &self.inner.image } } impl SubImage where I: DerefMut, I::Target: GenericImage, { /// Create a mutable sub-view of the image. /// /// The coordinates given are relative to the current view on the underlying image. pub fn sub_image( &mut self, x: u32, y: u32, width: u32, height: u32, ) -> SubImage<&mut I::Target> { assert!(u64::from(x) + u64::from(width) <= u64::from(self.inner.width())); assert!(u64::from(y) + u64::from(height) <= u64::from(self.inner.height())); let x = self.inner.xoffset.saturating_add(x); let y = self.inner.yoffset.saturating_add(y); SubImage::new(&mut *self.inner.image, x, y, width, height) } /// Get a mutable reference to the underlying image. pub fn inner_mut(&mut self) -> &mut I::Target { &mut self.inner.image } } impl Deref for SubImage where I: Deref, { type Target = SubImageInner; fn deref(&self) -> &Self::Target { &self.inner } } impl DerefMut for SubImage where I: DerefMut, { fn deref_mut(&mut self) -> &mut Self::Target { &mut self.inner } } #[allow(deprecated)] impl GenericImageView for SubImageInner where I: Deref, I::Target: GenericImageView, { type Pixel = DerefPixel; fn dimensions(&self) -> (u32, u32) { (self.xstride, self.ystride) } fn get_pixel(&self, x: u32, y: u32) -> Self::Pixel { self.image.get_pixel(x + self.xoffset, y + self.yoffset) } } #[allow(deprecated)] impl GenericImage for SubImageInner where I: DerefMut, I::Target: GenericImage + Sized, { fn get_pixel_mut(&mut self, x: u32, y: u32) -> &mut Self::Pixel { self.image.get_pixel_mut(x + self.xoffset, y + self.yoffset) } fn put_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel) { self.image .put_pixel(x + self.xoffset, y + self.yoffset, pixel); } /// DEPRECATED: This method will be removed. 
Blend the pixel directly instead. fn blend_pixel(&mut self, x: u32, y: u32, pixel: Self::Pixel) { self.image .blend_pixel(x + self.xoffset, y + self.yoffset, pixel); } } #[cfg(test)] mod tests { use std::collections::HashSet; use std::io; use std::path::Path; use super::{ load_rect, ColorType, GenericImage, GenericImageView, ImageDecoder, ImageFormat, ImageResult, }; use crate::color::Rgba; use crate::math::Rect; use crate::{GrayImage, ImageBuffer}; #[test] #[allow(deprecated)] /// Test that alpha blending works as expected fn test_image_alpha_blending() { let mut target = ImageBuffer::new(1, 1); target.put_pixel(0, 0, Rgba([255u8, 0, 0, 255])); assert!(*target.get_pixel(0, 0) == Rgba([255, 0, 0, 255])); target.blend_pixel(0, 0, Rgba([0, 255, 0, 255])); assert!(*target.get_pixel(0, 0) == Rgba([0, 255, 0, 255])); // Blending an alpha channel onto a solid background target.blend_pixel(0, 0, Rgba([255, 0, 0, 127])); assert!(*target.get_pixel(0, 0) == Rgba([127, 127, 0, 255])); // Blending two alpha channels target.put_pixel(0, 0, Rgba([0, 255, 0, 127])); target.blend_pixel(0, 0, Rgba([255, 0, 0, 127])); assert!(*target.get_pixel(0, 0) == Rgba([169, 85, 0, 190])); } #[test] fn test_in_bounds() { let mut target = ImageBuffer::new(2, 2); target.put_pixel(0, 0, Rgba([255u8, 0, 0, 255])); assert!(target.in_bounds(0, 0)); assert!(target.in_bounds(1, 0)); assert!(target.in_bounds(0, 1)); assert!(target.in_bounds(1, 1)); assert!(!target.in_bounds(2, 0)); assert!(!target.in_bounds(0, 2)); assert!(!target.in_bounds(2, 2)); } #[test] fn test_can_subimage_clone_nonmut() { let mut source = ImageBuffer::new(3, 3); source.put_pixel(1, 1, Rgba([255u8, 0, 0, 255])); // A non-mutable copy of the source image let source = source.clone(); // Clone a view into non-mutable to a separate buffer let cloned = source.view(1, 1, 1, 1).to_image(); assert!(cloned.get_pixel(0, 0) == source.get_pixel(1, 1)); } #[test] fn test_can_nest_views() { let mut source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); { let mut sub1 = source.sub_image(0, 0, 2, 2); let mut sub2 = sub1.sub_image(1, 1, 1, 1); sub2.put_pixel(0, 0, Rgba([0, 0, 0, 0])); } assert_eq!(*source.get_pixel(1, 1), Rgba([0, 0, 0, 0])); let view1 = source.view(0, 0, 2, 2); assert_eq!(*source.get_pixel(1, 1), view1.get_pixel(1, 1)); let view2 = view1.view(1, 1, 1, 1); assert_eq!(*source.get_pixel(1, 1), view2.get_pixel(0, 0)); } #[test] #[should_panic] fn test_view_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(1, 1, 3, 3); } #[test] #[should_panic] fn test_view_coordinates_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(3, 3, 3, 3); } #[test] #[should_panic] fn test_view_width_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(1, 1, 3, 2); } #[test] #[should_panic] fn test_view_height_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(1, 1, 2, 3); } #[test] #[should_panic] fn test_view_x_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(3, 1, 3, 3); } #[test] #[should_panic] fn test_view_y_out_of_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(1, 3, 3, 3); } #[test] fn test_view_in_bounds() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); source.view(0, 0, 3, 3); source.view(1, 1, 2, 2); source.view(2, 2, 0, 0); } #[test] fn 
test_copy_sub_image() { let source = ImageBuffer::from_pixel(3, 3, Rgba([255u8, 0, 0, 255])); let view = source.view(0, 0, 3, 3); let _view2 = view; view.to_image(); } #[test] fn test_load_rect() { struct MockDecoder { scanline_number: u64, scanline_bytes: u64, } impl ImageDecoder for MockDecoder { fn dimensions(&self) -> (u32, u32) { (5, 5) } fn color_type(&self) -> ColorType { ColorType::L8 } fn read_image(self, _buf: &mut [u8]) -> ImageResult<()> { unimplemented!() } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } const DATA: [u8; 25] = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, ]; fn seek_scanline(m: &mut MockDecoder, n: u64) -> io::Result<()> { m.scanline_number = n; Ok(()) } fn read_scanline(m: &mut MockDecoder, buf: &mut [u8]) -> io::Result<()> { let bytes_read = m.scanline_number * m.scanline_bytes; if bytes_read >= 25 { return Ok(()); } let len = m.scanline_bytes.min(25 - bytes_read); buf[..(len as usize)].copy_from_slice(&DATA[(bytes_read as usize)..][..(len as usize)]); m.scanline_number += 1; Ok(()) } for scanline_bytes in 1..30 { let mut output = [0u8; 26]; load_rect( 0, 0, 5, 5, &mut output, 5, &mut MockDecoder { scanline_number: 0, scanline_bytes, }, scanline_bytes as usize, seek_scanline, read_scanline, ) .unwrap(); assert_eq!(output[0..25], DATA); assert_eq!(output[25], 0); output = [0u8; 26]; load_rect( 3, 2, 1, 1, &mut output, 1, &mut MockDecoder { scanline_number: 0, scanline_bytes, }, scanline_bytes as usize, seek_scanline, read_scanline, ) .unwrap(); assert_eq!(output[0..2], [13, 0]); output = [0u8; 26]; load_rect( 3, 2, 2, 2, &mut output, 2, &mut MockDecoder { scanline_number: 0, scanline_bytes, }, scanline_bytes as usize, seek_scanline, read_scanline, ) .unwrap(); assert_eq!(output[0..5], [13, 14, 18, 19, 0]); output = [0u8; 26]; load_rect( 1, 1, 2, 4, &mut output, 2, &mut MockDecoder { scanline_number: 0, scanline_bytes, }, scanline_bytes as usize, seek_scanline, read_scanline, ) .unwrap(); assert_eq!(output[0..9], [6, 7, 11, 12, 16, 17, 21, 22, 0]); } } #[test] fn test_load_rect_single_scanline() { const DATA: [u8; 25] = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, ]; struct MockDecoder; impl ImageDecoder for MockDecoder { fn dimensions(&self) -> (u32, u32) { (5, 5) } fn color_type(&self) -> ColorType { ColorType::L8 } fn read_image(self, _buf: &mut [u8]) -> ImageResult<()> { unimplemented!() } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } // Ensure that seek scanline is called only once. 
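// With `scanline_bytes == DATA.len()` the whole image is one scanline, so the
// only expected seek is the final rewind to scanline 0 that `load_rect`
// performs after reading.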
let mut seeks = 0; let seek_scanline = |_d: &mut MockDecoder, n: u64| -> io::Result<()> { seeks += 1; assert_eq!(n, 0); assert_eq!(seeks, 1); Ok(()) }; fn read_scanline(_m: &mut MockDecoder, buf: &mut [u8]) -> io::Result<()> { buf.copy_from_slice(&DATA); Ok(()) } let mut output = [0; 26]; load_rect( 1, 1, 2, 4, &mut output, 2, &mut MockDecoder, DATA.len(), seek_scanline, read_scanline, ) .unwrap(); assert_eq!(output[0..9], [6, 7, 11, 12, 16, 17, 21, 22, 0]); } #[test] fn test_image_format_from_path() { fn from_path(s: &str) -> ImageResult { ImageFormat::from_path(Path::new(s)) } assert_eq!(from_path("./a.jpg").unwrap(), ImageFormat::Jpeg); assert_eq!(from_path("./a.jpeg").unwrap(), ImageFormat::Jpeg); assert_eq!(from_path("./a.JPEG").unwrap(), ImageFormat::Jpeg); assert_eq!(from_path("./a.pNg").unwrap(), ImageFormat::Png); assert_eq!(from_path("./a.gif").unwrap(), ImageFormat::Gif); assert_eq!(from_path("./a.webp").unwrap(), ImageFormat::WebP); assert_eq!(from_path("./a.tiFF").unwrap(), ImageFormat::Tiff); assert_eq!(from_path("./a.tif").unwrap(), ImageFormat::Tiff); assert_eq!(from_path("./a.tga").unwrap(), ImageFormat::Tga); assert_eq!(from_path("./a.dds").unwrap(), ImageFormat::Dds); assert_eq!(from_path("./a.bmp").unwrap(), ImageFormat::Bmp); assert_eq!(from_path("./a.Ico").unwrap(), ImageFormat::Ico); assert_eq!(from_path("./a.hdr").unwrap(), ImageFormat::Hdr); assert_eq!(from_path("./a.exr").unwrap(), ImageFormat::OpenExr); assert_eq!(from_path("./a.pbm").unwrap(), ImageFormat::Pnm); assert_eq!(from_path("./a.pAM").unwrap(), ImageFormat::Pnm); assert_eq!(from_path("./a.Ppm").unwrap(), ImageFormat::Pnm); assert_eq!(from_path("./a.pgm").unwrap(), ImageFormat::Pnm); assert_eq!(from_path("./a.AViF").unwrap(), ImageFormat::Avif); assert_eq!(from_path("./a.PCX").unwrap(), ImageFormat::Pcx); assert!(from_path("./a.txt").is_err()); assert!(from_path("./a").is_err()); } #[test] fn test_generic_image_copy_within_oob() { let mut image: GrayImage = ImageBuffer::from_raw(4, 4, vec![0u8; 16]).unwrap(); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 0, width: 5, height: 4 }, 0, 0 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 0, width: 4, height: 5 }, 0, 0 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 1, y: 0, width: 4, height: 4 }, 0, 0 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 0, width: 4, height: 4 }, 1, 0 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 1, width: 4, height: 4 }, 0, 0 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 0, width: 4, height: 4 }, 0, 1 )); assert!(!image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 1, y: 1, width: 4, height: 4 }, 0, 0 )); } #[test] fn test_generic_image_copy_within_tl() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 1, 2, 3, 4, 0, 1, 2, 8, 4, 5, 6, 12, 8, 9, 10]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 0, width: 3, height: 3 }, 1, 1 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn test_generic_image_copy_within_tr() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 1, 2, 3, 1, 2, 3, 7, 5, 6, 7, 11, 9, 10, 11, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 1, y: 0, width: 3, height: 3 }, 0, 1 )); 
assert_eq!(&image.into_raw(), &expected); } #[test] fn test_generic_image_copy_within_bl() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [0, 4, 5, 6, 4, 8, 9, 10, 8, 12, 13, 14, 12, 13, 14, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 0, y: 1, width: 3, height: 3 }, 1, 0 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn test_generic_image_copy_within_br() { let data = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; let expected = [5, 6, 7, 3, 9, 10, 11, 7, 13, 14, 15, 11, 12, 13, 14, 15]; let mut image: GrayImage = ImageBuffer::from_raw(4, 4, Vec::from(&data[..])).unwrap(); assert!(image.sub_image(0, 0, 4, 4).copy_within( Rect { x: 1, y: 1, width: 3, height: 3 }, 0, 0 )); assert_eq!(&image.into_raw(), &expected); } #[test] fn image_formats_are_recognized() { use ImageFormat::*; const ALL_FORMATS: &[ImageFormat] = &[ Avif, Png, Jpeg, Gif, WebP, Pnm, Tiff, Tga, Dds, Bmp, Ico, Hdr, Farbfeld, OpenExr, Pcx, ]; for &format in ALL_FORMATS { let mut file = Path::new("file.nothing").to_owned(); for ext in format.extensions_str() { assert!(file.set_extension(ext)); match ImageFormat::from_path(&file) { Err(_) => panic!("Path {} not recognized as {:?}", file.display(), format), Ok(result) => assert_eq!(format, result), } } } } #[test] fn total_bytes_overflow() { struct D; impl ImageDecoder for D { fn color_type(&self) -> ColorType { ColorType::Rgb8 } fn dimensions(&self) -> (u32, u32) { (0xffff_ffff, 0xffff_ffff) } fn read_image(self, _buf: &mut [u8]) -> ImageResult<()> { unimplemented!() } fn read_image_boxed(self: Box, buf: &mut [u8]) -> ImageResult<()> { (*self).read_image(buf) } } assert_eq!(D.total_bytes(), u64::MAX); let v: ImageResult> = super::decoder_to_vec(D); assert!(v.is_err()); } #[test] fn all() { let all_formats: HashSet = ImageFormat::all().collect(); assert!(all_formats.contains(&ImageFormat::Avif)); assert!(all_formats.contains(&ImageFormat::Gif)); assert!(all_formats.contains(&ImageFormat::Bmp)); assert!(all_formats.contains(&ImageFormat::Farbfeld)); assert!(all_formats.contains(&ImageFormat::Jpeg)); } #[test] fn reading_enabled() { assert_eq!(cfg!(feature = "jpeg"), ImageFormat::Jpeg.reading_enabled()); assert_eq!( cfg!(feature = "ff"), ImageFormat::Farbfeld.reading_enabled() ); assert!(!ImageFormat::Dds.reading_enabled()); } #[test] fn writing_enabled() { assert_eq!(cfg!(feature = "jpeg"), ImageFormat::Jpeg.writing_enabled()); assert_eq!( cfg!(feature = "ff"), ImageFormat::Farbfeld.writing_enabled() ); assert!(!ImageFormat::Dds.writing_enabled()); } } image-0.25.5/src/image_reader/free_functions.rs000064400000000000000000000142101046102023000175470ustar 00000000000000use std::fs::File; use std::io::{BufRead, BufWriter, Seek}; use std::path::Path; use crate::{codecs::*, ExtendedColorType, ImageReader}; use crate::dynimage::DynamicImage; use crate::error::{ImageError, ImageFormatHint, ImageResult}; use crate::error::{UnsupportedError, UnsupportedErrorKind}; use crate::image::ImageFormat; #[allow(unused_imports)] // When no features are supported use crate::image::{ImageDecoder, ImageEncoder}; /// Create a new image from a Reader. /// /// Assumes the reader is already buffered. For optimal performance, /// consider wrapping the reader with a `BufReader::new()`. /// /// Try [`ImageReader`] for more advanced uses. 
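/// A minimal usage sketch; the file name below is only illustrative, and actually
/// decoding a PNG at runtime would additionally require the matching format feature:
///
/// ```no_run
/// # use image::ImageError;
/// # fn main() -> Result<(), ImageError> {
/// use std::fs::File;
/// use std::io::BufReader;
/// use image::ImageFormat;
///
/// let reader = BufReader::new(File::open("example.png")?);
/// let _img = image::load(reader, ImageFormat::Png)?;
/// # Ok(()) }
/// ```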
pub fn load(r: R, format: ImageFormat) -> ImageResult { let mut reader = ImageReader::new(r); reader.set_format(format); reader.decode() } #[allow(unused_variables)] // Most variables when no features are supported pub(crate) fn save_buffer_impl( path: &Path, buf: &[u8], width: u32, height: u32, color: ExtendedColorType, ) -> ImageResult<()> { let format = ImageFormat::from_path(path)?; save_buffer_with_format_impl(path, buf, width, height, color, format) } #[allow(unused_variables)] // Most variables when no features are supported pub(crate) fn save_buffer_with_format_impl( path: &Path, buf: &[u8], width: u32, height: u32, color: ExtendedColorType, format: ImageFormat, ) -> ImageResult<()> { let buffered_file_write = &mut BufWriter::new(File::create(path)?); // always seekable write_buffer_impl(buffered_file_write, buf, width, height, color, format) } #[allow(unused_variables)] // Most variables when no features are supported pub(crate) fn write_buffer_impl( buffered_write: &mut W, buf: &[u8], width: u32, height: u32, color: ExtendedColorType, format: ImageFormat, ) -> ImageResult<()> { match format { #[cfg(feature = "png")] ImageFormat::Png => { png::PngEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "jpeg")] ImageFormat::Jpeg => { jpeg::JpegEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "pnm")] ImageFormat::Pnm => { pnm::PnmEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "gif")] ImageFormat::Gif => gif::GifEncoder::new(buffered_write).encode(buf, width, height, color), #[cfg(feature = "ico")] ImageFormat::Ico => { ico::IcoEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "bmp")] ImageFormat::Bmp => { bmp::BmpEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "ff")] ImageFormat::Farbfeld => { farbfeld::FarbfeldEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "tga")] ImageFormat::Tga => { tga::TgaEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "exr")] ImageFormat::OpenExr => { openexr::OpenExrEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "tiff")] ImageFormat::Tiff => { tiff::TiffEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "avif")] ImageFormat::Avif => { avif::AvifEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "qoi")] ImageFormat::Qoi => { qoi::QoiEncoder::new(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "webp")] ImageFormat::WebP => { webp::WebPEncoder::new_lossless(buffered_write).write_image(buf, width, height, color) } #[cfg(feature = "hdr")] ImageFormat::Hdr => { hdr::HdrEncoder::new(buffered_write).write_image(buf, width, height, color) } _ => Err(ImageError::Unsupported( UnsupportedError::from_format_and_kind( ImageFormatHint::Unknown, UnsupportedErrorKind::Format(ImageFormatHint::Name(format!("{format:?}"))), ), )), } } static MAGIC_BYTES: [(&[u8], ImageFormat); 25] = [ (b"\x89PNG\r\n\x1a\n", ImageFormat::Png), (&[0xff, 0xd8, 0xff], ImageFormat::Jpeg), (b"GIF89a", ImageFormat::Gif), (b"GIF87a", ImageFormat::Gif), (b"RIFF", ImageFormat::WebP), // TODO: better magic byte detection, see https://github.com/image-rs/image/issues/660 (b"MM\x00*", ImageFormat::Tiff), (b"II*\x00", ImageFormat::Tiff), (b"DDS ", ImageFormat::Dds), (b"BM", ImageFormat::Bmp), (&[0, 0, 1, 0], 
ImageFormat::Ico), (b"#?RADIANCE", ImageFormat::Hdr), (b"P1", ImageFormat::Pnm), (b"P2", ImageFormat::Pnm), (b"P3", ImageFormat::Pnm), (b"P4", ImageFormat::Pnm), (b"P5", ImageFormat::Pnm), (b"P6", ImageFormat::Pnm), (b"P7", ImageFormat::Pnm), (b"farbfeld", ImageFormat::Farbfeld), (b"\0\0\0 ftypavif", ImageFormat::Avif), (b"\0\0\0\x1cftypavif", ImageFormat::Avif), (&[0x76, 0x2f, 0x31, 0x01], ImageFormat::OpenExr), // = &exr::meta::magic_number::BYTES (b"qoif", ImageFormat::Qoi), (&[0x0a, 0x02], ImageFormat::Pcx), (&[0x0a, 0x05], ImageFormat::Pcx), ]; /// Guess image format from memory block /// /// Makes an educated guess about the image format based on the Magic Bytes at the beginning. /// TGA is not supported by this function. /// This is not to be trusted on the validity of the whole memory block pub fn guess_format(buffer: &[u8]) -> ImageResult { match guess_format_impl(buffer) { Some(format) => Ok(format), None => Err(ImageError::Unsupported(ImageFormatHint::Unknown.into())), } } pub(crate) fn guess_format_impl(buffer: &[u8]) -> Option { for &(signature, format) in &MAGIC_BYTES { if buffer.starts_with(signature) { return Some(format); } } None } image-0.25.5/src/image_reader/image_reader_type.rs000064400000000000000000000265641046102023000202220ustar 00000000000000use std::fs::File; use std::io::{self, BufRead, BufReader, Cursor, Read, Seek, SeekFrom}; use std::path::Path; use crate::dynimage::DynamicImage; use crate::error::{ImageFormatHint, UnsupportedError, UnsupportedErrorKind}; use crate::image::ImageFormat; use crate::{ImageDecoder, ImageError, ImageResult}; use super::free_functions; /// A multi-format image reader. /// /// Wraps an input reader to facilitate automatic detection of an image's format, appropriate /// decoding method, and dispatches into the set of supported [`ImageDecoder`] implementations. /// /// ## Usage /// /// Opening a file, deducing the format based on the file path automatically, and trying to decode /// the image contained can be performed by constructing the reader and immediately consuming it. /// /// ```no_run /// # use image::ImageError; /// # use image::ImageReader; /// # fn main() -> Result<(), ImageError> { /// let image = ImageReader::open("path/to/image.png")? /// .decode()?; /// # Ok(()) } /// ``` /// /// It is also possible to make a guess based on the content. This is especially handy if the /// source is some blob in memory and you have constructed the reader in another way. Here is an /// example with a `pnm` black-and-white subformat that encodes its pixel matrix with ascii values. /// /// ``` /// # use image::ImageError; /// # use image::ImageReader; /// # fn main() -> Result<(), ImageError> { /// use std::io::Cursor; /// use image::ImageFormat; /// /// let raw_data = b"P1 2 2\n\ /// 0 1\n\ /// 1 0\n"; /// /// let mut reader = ImageReader::new(Cursor::new(raw_data)) /// .with_guessed_format() /// .expect("Cursor io never fails"); /// assert_eq!(reader.format(), Some(ImageFormat::Pnm)); /// /// # #[cfg(feature = "pnm")] /// let image = reader.decode()?; /// # Ok(()) } /// ``` /// /// As a final fallback or if only a specific format must be used, the reader always allows manual /// specification of the supposed image format with [`set_format`]. /// /// [`set_format`]: #method.set_format /// [`ImageDecoder`]: ../trait.ImageDecoder.html pub struct ImageReader { /// The reader. Should be buffered. inner: R, /// The format, if one has been set or deduced. 
format: Option, /// Decoding limits limits: super::Limits, } impl<'a, R: 'a + BufRead + Seek> ImageReader { /// Create a new image reader without a preset format. /// /// Assumes the reader is already buffered. For optimal performance, /// consider wrapping the reader with a `BufReader::new()`. /// /// It is possible to guess the format based on the content of the read object with /// [`with_guessed_format`], or to set the format directly with [`set_format`]. /// /// [`with_guessed_format`]: #method.with_guessed_format /// [`set_format`]: method.set_format pub fn new(buffered_reader: R) -> Self { ImageReader { inner: buffered_reader, format: None, limits: super::Limits::default(), } } /// Construct a reader with specified format. /// /// Assumes the reader is already buffered. For optimal performance, /// consider wrapping the reader with a `BufReader::new()`. pub fn with_format(buffered_reader: R, format: ImageFormat) -> Self { ImageReader { inner: buffered_reader, format: Some(format), limits: super::Limits::default(), } } /// Get the currently determined format. pub fn format(&self) -> Option { self.format } /// Supply the format as which to interpret the read image. pub fn set_format(&mut self, format: ImageFormat) { self.format = Some(format); } /// Remove the current information on the image format. /// /// Note that many operations require format information to be present and will return e.g. an /// `ImageError::Unsupported` when the image format has not been set. pub fn clear_format(&mut self) { self.format = None; } /// Disable all decoding limits. pub fn no_limits(&mut self) { self.limits = super::Limits::no_limits(); } /// Set a custom set of decoding limits. pub fn limits(&mut self, limits: super::Limits) { self.limits = limits; } /// Unwrap the reader. pub fn into_inner(self) -> R { self.inner } /// Makes a decoder. /// /// For all formats except PNG, the limits are ignored and can be set with /// `ImageDecoder::set_limits` after calling this function. PNG is handled specially because that /// decoder has a different API which does not allow setting limits after construction. fn make_decoder( format: ImageFormat, reader: R, limits_for_png: super::Limits, ) -> ImageResult> { #[allow(unused)] use crate::codecs::*; #[allow(unreachable_patterns)] // Default is unreachable if all features are supported. 
Ok(match format { #[cfg(feature = "avif-native")] ImageFormat::Avif => Box::new(avif::AvifDecoder::new(reader)?), #[cfg(feature = "png")] ImageFormat::Png => Box::new(png::PngDecoder::with_limits(reader, limits_for_png)?), #[cfg(feature = "gif")] ImageFormat::Gif => Box::new(gif::GifDecoder::new(reader)?), #[cfg(feature = "jpeg")] ImageFormat::Jpeg => Box::new(jpeg::JpegDecoder::new(reader)?), #[cfg(feature = "webp")] ImageFormat::WebP => Box::new(webp::WebPDecoder::new(reader)?), #[cfg(feature = "tiff")] ImageFormat::Tiff => Box::new(tiff::TiffDecoder::new(reader)?), #[cfg(feature = "tga")] ImageFormat::Tga => Box::new(tga::TgaDecoder::new(reader)?), #[cfg(feature = "dds")] ImageFormat::Dds => Box::new(dds::DdsDecoder::new(reader)?), #[cfg(feature = "bmp")] ImageFormat::Bmp => Box::new(bmp::BmpDecoder::new(reader)?), #[cfg(feature = "ico")] ImageFormat::Ico => Box::new(ico::IcoDecoder::new(reader)?), #[cfg(feature = "hdr")] ImageFormat::Hdr => Box::new(hdr::HdrDecoder::new(reader)?), #[cfg(feature = "exr")] ImageFormat::OpenExr => Box::new(openexr::OpenExrDecoder::new(reader)?), #[cfg(feature = "pnm")] ImageFormat::Pnm => Box::new(pnm::PnmDecoder::new(reader)?), #[cfg(feature = "ff")] ImageFormat::Farbfeld => Box::new(farbfeld::FarbfeldDecoder::new(reader)?), #[cfg(feature = "qoi")] ImageFormat::Qoi => Box::new(qoi::QoiDecoder::new(reader)?), #[cfg(feature = "pcx")] ImageFormat::Pcx => Box::new(pcx::PCXDecoder::new(reader)?), format => { return Err(ImageError::Unsupported( ImageFormatHint::Exact(format).into(), )) } }) } /// Convert the reader into a decoder. pub fn into_decoder(mut self) -> ImageResult { let mut decoder = Self::make_decoder(self.require_format()?, self.inner, self.limits.clone())?; decoder.set_limits(self.limits)?; Ok(decoder) } /// Make a format guess based on the content, replacing it on success. /// /// Returns `Ok` with the guess if no io error occurs. Additionally, replaces the current /// format if the guess was successful. If the guess was unable to determine a format then /// the current format of the reader is unchanged. /// /// Returns an error if the underlying reader fails. The format is unchanged. The error is a /// `std::io::Error` and not `ImageError` since the only error case is an error when the /// underlying reader seeks. /// /// When an error occurs, the reader may not have been properly reset and it is potentially /// hazardous to continue with more io. /// /// ## Usage /// /// This supplements the path based type deduction from [`ImageReader::open()`] with content based deduction. /// This is more common in Linux and UNIX operating systems and also helpful if the path can /// not be directly controlled. /// /// ```no_run /// # use image::ImageError; /// # use image::ImageReader; /// # fn main() -> Result<(), ImageError> { /// let image = ImageReader::open("image.unknown")? /// .with_guessed_format()? /// .decode()?; /// # Ok(()) } /// ``` pub fn with_guessed_format(mut self) -> io::Result { let format = self.guess_format()?; // Replace format if found, keep current state if not. self.format = format.or(self.format); Ok(self) } fn guess_format(&mut self) -> io::Result> { let mut start = [0; 16]; // Save current offset, read start, restore offset. let cur = self.inner.stream_position()?; let len = io::copy( // Accept shorter files but read at most 16 bytes. 
&mut self.inner.by_ref().take(16), &mut Cursor::new(&mut start[..]), )?; self.inner.seek(SeekFrom::Start(cur))?; Ok(free_functions::guess_format_impl(&start[..len as usize])) } /// Read the image dimensions. /// /// Uses the current format to construct the correct reader for the format. /// /// If no format was determined, returns an `ImageError::Unsupported`. pub fn into_dimensions(self) -> ImageResult<(u32, u32)> { self.into_decoder().map(|d| d.dimensions()) } /// Read the image (replaces `load`). /// /// Uses the current format to construct the correct reader for the format. /// /// If no format was determined, returns an `ImageError::Unsupported`. pub fn decode(mut self) -> ImageResult { let format = self.require_format()?; let mut limits = self.limits; let mut decoder = Self::make_decoder(format, self.inner, limits.clone())?; // Check that we do not allocate a bigger buffer than we are allowed to // FIXME: should this rather go in `DynamicImage::from_decoder` somehow? limits.reserve(decoder.total_bytes())?; decoder.set_limits(limits)?; DynamicImage::from_decoder(decoder) } fn require_format(&mut self) -> ImageResult { self.format.ok_or_else(|| { ImageError::Unsupported(UnsupportedError::from_format_and_kind( ImageFormatHint::Unknown, UnsupportedErrorKind::Format(ImageFormatHint::Unknown), )) }) } } impl ImageReader> { /// Open a file to read, format will be guessed from path. /// /// This will not attempt any io operation on the opened file. /// /// If you want to inspect the content for a better guess on the format, which does not depend /// on file extensions, follow this call with a call to [`with_guessed_format`]. /// /// [`with_guessed_format`]: #method.with_guessed_format pub fn open
<P>
(path: P) -> io::Result where P: AsRef, { Self::open_impl(path.as_ref()) } fn open_impl(path: &Path) -> io::Result { Ok(ImageReader { inner: BufReader::new(File::open(path)?), format: ImageFormat::from_path(path).ok(), limits: super::Limits::default(), }) } } image-0.25.5/src/image_reader/mod.rs000064400000000000000000000145151046102023000153250ustar 00000000000000//! Input and output of images. use crate::{error, ColorType, ImageError, ImageResult}; pub(crate) mod free_functions; mod image_reader_type; pub use self::image_reader_type::ImageReader; /// Set of supported strict limits for a decoder. #[derive(Clone, Debug, Default, Eq, PartialEq, Hash)] #[allow(missing_copy_implementations)] #[non_exhaustive] pub struct LimitSupport {} /// Resource limits for decoding. /// /// Limits can be either *strict* or *non-strict*. Non-strict limits are best-effort /// limits where the library does not guarantee that limit will not be exceeded. Do note /// that it is still considered a bug if a non-strict limit is exceeded. /// Some of the underlying decoders do not support such limits, so one cannot /// rely on these limits being supported. For strict limits, the library makes a stronger /// guarantee that the limit will not be exceeded. Exceeding a strict limit is considered /// a critical bug. If a decoder cannot guarantee that it will uphold a strict limit, it /// *must* fail with [`error::LimitErrorKind::Unsupported`]. /// /// The only currently supported strict limits are the `max_image_width` and `max_image_height` /// limits, but more will be added in the future. [`LimitSupport`] will default to support /// being false, and decoders should enable support for the limits they support in /// [`ImageDecoder::set_limits`]. /// /// The limit check should only ever fail if a limit will be exceeded or an unsupported strict /// limit is used. /// /// [`LimitSupport`]: ./struct.LimitSupport.html /// [`ImageDecoder::set_limits`]: ../trait.ImageDecoder.html#method.set_limits #[derive(Clone, Debug, Eq, PartialEq, Hash)] #[allow(missing_copy_implementations)] #[non_exhaustive] pub struct Limits { /// The maximum allowed image width. This limit is strict. The default is no limit. pub max_image_width: Option, /// The maximum allowed image height. This limit is strict. The default is no limit. pub max_image_height: Option, /// The maximum allowed sum of allocations allocated by the decoder at any one time excluding /// allocator overhead. This limit is non-strict by default and some decoders may ignore it. /// The bytes required to store the output image count towards this value. The default is /// 512MiB. pub max_alloc: Option, } impl Default for Limits { fn default() -> Limits { Limits { max_image_width: None, max_image_height: None, max_alloc: Some(512 * 1024 * 1024), } } } impl Limits { /// Disable all limits. #[must_use] pub fn no_limits() -> Limits { Limits { max_image_width: None, max_image_height: None, max_alloc: None, } } /// This function checks that all currently set strict limits are supported. pub fn check_support(&self, _supported: &LimitSupport) -> ImageResult<()> { Ok(()) } /// This function checks the `max_image_width` and `max_image_height` limits given /// the image width and height. 
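    /// A small illustrative check; the re-export path and limit values are assumptions
    /// chosen for the example:
    ///
    /// ```
    /// use image::Limits;
    ///
    /// let mut limits = Limits::default();
    /// limits.max_image_width = Some(2048);
    /// assert!(limits.check_dimensions(1024, 1024).is_ok());
    /// assert!(limits.check_dimensions(4096, 1024).is_err());
    /// ```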
pub fn check_dimensions(&self, width: u32, height: u32) -> ImageResult<()> { if let Some(max_width) = self.max_image_width { if width > max_width { return Err(ImageError::Limits(error::LimitError::from_kind( error::LimitErrorKind::DimensionError, ))); } } if let Some(max_height) = self.max_image_height { if height > max_height { return Err(ImageError::Limits(error::LimitError::from_kind( error::LimitErrorKind::DimensionError, ))); } } Ok(()) } /// This function checks that the current limit allows for reserving the set amount /// of bytes, it then reduces the limit accordingly. pub fn reserve(&mut self, amount: u64) -> ImageResult<()> { if let Some(max_alloc) = self.max_alloc.as_mut() { if *max_alloc < amount { return Err(ImageError::Limits(error::LimitError::from_kind( error::LimitErrorKind::InsufficientMemory, ))); } *max_alloc -= amount; } Ok(()) } /// This function acts identically to [`reserve`], but takes a `usize` for convenience. /// /// [`reserve`]: #method.reserve pub fn reserve_usize(&mut self, amount: usize) -> ImageResult<()> { match u64::try_from(amount) { Ok(n) => self.reserve(n), Err(_) if self.max_alloc.is_some() => Err(ImageError::Limits( error::LimitError::from_kind(error::LimitErrorKind::InsufficientMemory), )), Err(_) => { // Out of bounds, but we weren't asked to consider any limit. Ok(()) } } } /// This function acts identically to [`reserve`], but accepts the width, height and color type /// used to create an [`ImageBuffer`] and does all the math for you. /// /// [`ImageBuffer`]: crate::ImageBuffer /// [`reserve`]: #method.reserve pub fn reserve_buffer( &mut self, width: u32, height: u32, color_type: ColorType, ) -> ImageResult<()> { self.check_dimensions(width, height)?; let in_memory_size = u64::from(width) .saturating_mul(u64::from(height)) .saturating_mul(color_type.bytes_per_pixel().into()); self.reserve(in_memory_size)?; Ok(()) } /// This function increases the `max_alloc` limit with amount. Should only be used /// together with [`reserve`]. /// /// [`reserve`]: #method.reserve pub fn free(&mut self, amount: u64) { if let Some(max_alloc) = self.max_alloc.as_mut() { *max_alloc = max_alloc.saturating_add(amount); } } /// This function acts identically to [`free`], but takes a `usize` for convenience. /// /// [`free`]: #method.free pub fn free_usize(&mut self, amount: usize) { match u64::try_from(amount) { Ok(n) => self.free(n), Err(_) if self.max_alloc.is_some() => { panic!("max_alloc is set, we should have exited earlier when the reserve failed"); } Err(_) => { // Out of bounds, but we weren't asked to consider any limit. } } } } image-0.25.5/src/imageops/affine.rs000064400000000000000000000275141046102023000152010ustar 00000000000000//! Functions for performing affine transformations. use crate::error::{ImageError, ParameterError, ParameterErrorKind}; use crate::image::{GenericImage, GenericImageView}; use crate::traits::Pixel; use crate::ImageBuffer; /// Rotate an image 90 degrees clockwise. pub fn rotate90( image: &I, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(height, width); let _ = rotate90_in(image, &mut out); out } /// Rotate an image 180 degrees clockwise. pub fn rotate180( image: &I, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let _ = rotate180_in(image, &mut out); out } /// Rotate an image 270 degrees clockwise. 
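/// A quick sketch using the same 3x2 sample data as the unit tests in this module:
///
/// ```
/// use image::{imageops::rotate270, GrayImage};
///
/// let image = GrayImage::from_raw(3, 2, vec![0u8, 1, 2, 10, 11, 12]).unwrap();
/// let rotated = rotate270(&image);
/// assert_eq!(rotated.dimensions(), (2, 3));
/// assert_eq!(rotated.into_raw(), vec![2u8, 12, 1, 11, 0, 10]);
/// ```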
pub fn rotate270( image: &I, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(height, width); let _ = rotate270_in(image, &mut out); out } /// Rotate an image 90 degrees clockwise and put the result into the destination [`ImageBuffer`]. pub fn rotate90_in( image: &I, destination: &mut ImageBuffer, ) -> crate::ImageResult<()> where I: GenericImageView, I::Pixel: 'static, Container: std::ops::DerefMut::Subpixel]>, { let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions()); if w0 != h1 || h0 != w1 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for y in 0..h0 { for x in 0..w0 { let p = image.get_pixel(x, y); destination.put_pixel(h0 - y - 1, x, p); } } Ok(()) } /// Rotate an image 180 degrees clockwise and put the result into the destination [`ImageBuffer`]. pub fn rotate180_in( image: &I, destination: &mut ImageBuffer, ) -> crate::ImageResult<()> where I: GenericImageView, I::Pixel: 'static, Container: std::ops::DerefMut::Subpixel]>, { let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions()); if w0 != w1 || h0 != h1 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for y in 0..h0 { for x in 0..w0 { let p = image.get_pixel(x, y); destination.put_pixel(w0 - x - 1, h0 - y - 1, p); } } Ok(()) } /// Rotate an image 270 degrees clockwise and put the result into the destination [`ImageBuffer`]. pub fn rotate270_in( image: &I, destination: &mut ImageBuffer, ) -> crate::ImageResult<()> where I: GenericImageView, I::Pixel: 'static, Container: std::ops::DerefMut::Subpixel]>, { let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions()); if w0 != h1 || h0 != w1 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for y in 0..h0 { for x in 0..w0 { let p = image.get_pixel(x, y); destination.put_pixel(y, w0 - x - 1, p); } } Ok(()) } /// Flip an image horizontally pub fn flip_horizontal( image: &I, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let _ = flip_horizontal_in(image, &mut out); out } /// Flip an image vertically pub fn flip_vertical( image: &I, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let _ = flip_vertical_in(image, &mut out); out } /// Flip an image horizontally and put the result into the destination [`ImageBuffer`]. pub fn flip_horizontal_in( image: &I, destination: &mut ImageBuffer, ) -> crate::ImageResult<()> where I: GenericImageView, I::Pixel: 'static, Container: std::ops::DerefMut::Subpixel]>, { let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions()); if w0 != w1 || h0 != h1 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for y in 0..h0 { for x in 0..w0 { let p = image.get_pixel(x, y); destination.put_pixel(w0 - x - 1, y, p); } } Ok(()) } /// Flip an image vertically and put the result into the destination [`ImageBuffer`]. 
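/// A quick illustrative sketch with an arbitrary 2x2 buffer:
///
/// ```
/// use image::{imageops::flip_vertical_in, GrayImage};
///
/// let source = GrayImage::from_raw(2, 2, vec![1u8, 2, 3, 4]).unwrap();
/// let mut destination = GrayImage::new(2, 2);
/// flip_vertical_in(&source, &mut destination).unwrap();
/// assert_eq!(destination.into_raw(), vec![3u8, 4, 1, 2]);
/// ```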
pub fn flip_vertical_in( image: &I, destination: &mut ImageBuffer, ) -> crate::ImageResult<()> where I: GenericImageView, I::Pixel: 'static, Container: std::ops::DerefMut::Subpixel]>, { let ((w0, h0), (w1, h1)) = (image.dimensions(), destination.dimensions()); if w0 != w1 || h0 != h1 { return Err(ImageError::Parameter(ParameterError::from_kind( ParameterErrorKind::DimensionMismatch, ))); } for y in 0..h0 { for x in 0..w0 { let p = image.get_pixel(x, y); destination.put_pixel(x, h0 - 1 - y, p); } } Ok(()) } /// Rotate an image 180 degrees clockwise in place. pub fn rotate180_in_place(image: &mut I) { let (width, height) = image.dimensions(); for y in 0..height / 2 { for x in 0..width { let p = image.get_pixel(x, y); let x2 = width - x - 1; let y2 = height - y - 1; let p2 = image.get_pixel(x2, y2); image.put_pixel(x, y, p2); image.put_pixel(x2, y2, p); } } if height % 2 != 0 { let middle = height / 2; for x in 0..width / 2 { let p = image.get_pixel(x, middle); let x2 = width - x - 1; let p2 = image.get_pixel(x2, middle); image.put_pixel(x, middle, p2); image.put_pixel(x2, middle, p); } } } /// Flip an image horizontally in place. pub fn flip_horizontal_in_place(image: &mut I) { let (width, height) = image.dimensions(); for y in 0..height { for x in 0..width / 2 { let x2 = width - x - 1; let p2 = image.get_pixel(x2, y); let p = image.get_pixel(x, y); image.put_pixel(x2, y, p); image.put_pixel(x, y, p2); } } } /// Flip an image vertically in place. pub fn flip_vertical_in_place(image: &mut I) { let (width, height) = image.dimensions(); for y in 0..height / 2 { for x in 0..width { let y2 = height - y - 1; let p2 = image.get_pixel(x, y2); let p = image.get_pixel(x, y); image.put_pixel(x, y2, p); image.put_pixel(x, y, p2); } } } #[cfg(test)] mod test { use super::{ flip_horizontal, flip_horizontal_in_place, flip_vertical, flip_vertical_in_place, rotate180, rotate180_in_place, rotate270, rotate90, }; use crate::image::GenericImage; use crate::traits::Pixel; use crate::{GrayImage, ImageBuffer}; macro_rules! assert_pixels_eq { ($actual:expr, $expected:expr) => {{ let actual_dim = $actual.dimensions(); let expected_dim = $expected.dimensions(); if actual_dim != expected_dim { panic!( "dimensions do not match. \ actual: {:?}, expected: {:?}", actual_dim, expected_dim ) } let diffs = pixel_diffs($actual, $expected); if !diffs.is_empty() { let mut err = "".to_string(); let diff_messages = diffs .iter() .take(5) .map(|d| format!("\nactual: {:?}, expected {:?} ", d.0, d.1)) .collect::>() .join(""); err.push_str(&diff_messages); panic!("pixels do not match. 
{:?}", err) } }}; } #[test] fn test_rotate90() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(2, 3, vec![10u8, 0u8, 11u8, 1u8, 12u8, 2u8]).unwrap(); assert_pixels_eq!(&rotate90(&image), &expected); } #[test] fn test_rotate180() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![12u8, 11u8, 10u8, 2u8, 1u8, 0u8]).unwrap(); assert_pixels_eq!(&rotate180(&image), &expected); } #[test] fn test_rotate270() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(2, 3, vec![2u8, 12u8, 1u8, 11u8, 0u8, 10u8]).unwrap(); assert_pixels_eq!(&rotate270(&image), &expected); } #[test] fn test_rotate180_in_place() { let mut image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![12u8, 11u8, 10u8, 2u8, 1u8, 0u8]).unwrap(); rotate180_in_place(&mut image); assert_pixels_eq!(&image, &expected); } #[test] fn test_flip_horizontal() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![2u8, 1u8, 0u8, 12u8, 11u8, 10u8]).unwrap(); assert_pixels_eq!(&flip_horizontal(&image), &expected); } #[test] fn test_flip_vertical() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 0u8, 1u8, 2u8]).unwrap(); assert_pixels_eq!(&flip_vertical(&image), &expected); } #[test] fn test_flip_horizontal_in_place() { let mut image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![2u8, 1u8, 0u8, 12u8, 11u8, 10u8]).unwrap(); flip_horizontal_in_place(&mut image); assert_pixels_eq!(&image, &expected); } #[test] fn test_flip_vertical_in_place() { let mut image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 0u8, 1u8, 2u8]).unwrap(); flip_vertical_in_place(&mut image); assert_pixels_eq!(&image, &expected); } #[allow(clippy::type_complexity)] fn pixel_diffs(left: &I, right: &J) -> Vec<((u32, u32, P), (u32, u32, P))> where I: GenericImage, J: GenericImage, P: Pixel + Eq, { left.pixels() .zip(right.pixels()) .filter(|&(p, q)| p != q) .collect::>() } } image-0.25.5/src/imageops/colorops.rs000064400000000000000000000476061046102023000156150ustar 00000000000000//! Functions for altering and converting the color of pixelbufs use num_traits::NumCast; use std::f64::consts::PI; use crate::color::{FromColor, IntoColor, Luma, LumaA}; use crate::image::{GenericImage, GenericImageView}; use crate::traits::{Pixel, Primitive}; use crate::utils::clamp; use crate::ImageBuffer; type Subpixel = <::Pixel as Pixel>::Subpixel; /// Convert the supplied image to grayscale. Alpha channel is discarded. pub fn grayscale( image: &I, ) -> ImageBuffer>, Vec>> { grayscale_with_type(image) } /// Convert the supplied image to grayscale. Alpha channel is preserved. 
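/// A quick illustrative sketch; the pixel values are arbitrary:
///
/// ```
/// use image::{imageops::grayscale_alpha, Rgba, RgbaImage};
///
/// let image = RgbaImage::from_pixel(1, 1, Rgba([255u8, 0, 0, 128]));
/// let gray = grayscale_alpha(&image);
/// // The alpha channel is carried over unchanged.
/// assert_eq!(gray.get_pixel(0, 0).0[1], 128);
/// ```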
pub fn grayscale_alpha( image: &I, ) -> ImageBuffer>, Vec>> { grayscale_with_type_alpha(image) } /// Convert the supplied image to a grayscale image with the specified pixel type. Alpha channel is discarded. pub fn grayscale_with_type( image: &I, ) -> ImageBuffer> where NewPixel: Pixel + FromColor>>, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); for (x, y, pixel) in image.pixels() { let grayscale = pixel.to_luma(); let new_pixel = grayscale.into_color(); // no-op for luma->luma out.put_pixel(x, y, new_pixel); } out } /// Convert the supplied image to a grayscale image with the specified pixel type. Alpha channel is preserved. pub fn grayscale_with_type_alpha( image: &I, ) -> ImageBuffer> where NewPixel: Pixel + FromColor>>, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); for (x, y, pixel) in image.pixels() { let grayscale = pixel.to_luma_alpha(); let new_pixel = grayscale.into_color(); // no-op for luma->luma out.put_pixel(x, y, new_pixel); } out } /// Invert each pixel within the supplied image. /// This function operates in place. pub fn invert(image: &mut I) { // TODO find a way to use pixels? let (width, height) = image.dimensions(); for y in 0..height { for x in 0..width { let mut p = image.get_pixel(x, y); p.invert(); image.put_pixel(x, y, p); } } } /// Adjust the contrast of the supplied image. /// ```contrast``` is the amount to adjust the contrast by. /// Negative values decrease the contrast and positive values increase the contrast. /// /// *[See also `contrast_in_place`.][contrast_in_place]* pub fn contrast(image: &I, contrast: f32) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let max = S::DEFAULT_MAX_VALUE; let max: f32 = NumCast::from(max).unwrap(); let percent = ((100.0 + contrast) / 100.0).powi(2); for (x, y, pixel) in image.pixels() { let f = pixel.map(|b| { let c: f32 = NumCast::from(b).unwrap(); let d = ((c / max - 0.5) * percent + 0.5) * max; let e = clamp(d, 0.0, max); NumCast::from(e).unwrap() }); out.put_pixel(x, y, f); } out } /// Adjust the contrast of the supplied image in place. /// ```contrast``` is the amount to adjust the contrast by. /// Negative values decrease the contrast and positive values increase the contrast. /// /// *[See also `contrast`.][contrast]* pub fn contrast_in_place(image: &mut I, contrast: f32) where I: GenericImage, { let (width, height) = image.dimensions(); let max = ::Subpixel::DEFAULT_MAX_VALUE; let max: f32 = NumCast::from(max).unwrap(); let percent = ((100.0 + contrast) / 100.0).powi(2); // TODO find a way to use pixels? for y in 0..height { for x in 0..width { let f = image.get_pixel(x, y).map(|b| { let c: f32 = NumCast::from(b).unwrap(); let d = ((c / max - 0.5) * percent + 0.5) * max; let e = clamp(d, 0.0, max); NumCast::from(e).unwrap() }); image.put_pixel(x, y, f); } } } /// Brighten the supplied image. /// ```value``` is the amount to brighten each pixel by. /// Negative values decrease the brightness and positive values increase it. 
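/// A quick illustrative sketch; the pixel value and brighten amount are arbitrary:
///
/// ```
/// use image::{imageops::brighten, GrayImage, Luma};
///
/// let image = GrayImage::from_pixel(1, 1, Luma([100u8]));
/// let brighter = brighten(&image, 20);
/// assert_eq!(brighter.get_pixel(0, 0).0[0], 120);
/// ```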
/// /// *[See also `brighten_in_place`.][brighten_in_place]* pub fn brighten(image: &I, value: i32) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let max = S::DEFAULT_MAX_VALUE; let max: i32 = NumCast::from(max).unwrap(); for (x, y, pixel) in image.pixels() { let e = pixel.map_with_alpha( |b| { let c: i32 = NumCast::from(b).unwrap(); let d = clamp(c + value, 0, max); NumCast::from(d).unwrap() }, |alpha| alpha, ); out.put_pixel(x, y, e); } out } /// Brighten the supplied image in place. /// ```value``` is the amount to brighten each pixel by. /// Negative values decrease the brightness and positive values increase it. /// /// *[See also `brighten`.][brighten]* pub fn brighten_in_place(image: &mut I, value: i32) where I: GenericImage, { let (width, height) = image.dimensions(); let max = ::Subpixel::DEFAULT_MAX_VALUE; let max: i32 = NumCast::from(max).unwrap(); // TODO what does this do for f32? clamp at 1?? // TODO find a way to use pixels? for y in 0..height { for x in 0..width { let e = image.get_pixel(x, y).map_with_alpha( |b| { let c: i32 = NumCast::from(b).unwrap(); let d = clamp(c + value, 0, max); NumCast::from(d).unwrap() }, |alpha| alpha, ); image.put_pixel(x, y, e); } } } /// Hue rotate the supplied image. /// `value` is the degrees to rotate each pixel by. /// 0 and 360 do nothing, the rest rotates by the given degree value. /// just like the css webkit filter hue-rotate(180) /// /// *[See also `huerotate_in_place`.][huerotate_in_place]* pub fn huerotate(image: &I, value: i32) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let angle: f64 = NumCast::from(value).unwrap(); let cosv = (angle * PI / 180.0).cos(); let sinv = (angle * PI / 180.0).sin(); let matrix: [f64; 9] = [ // Reds 0.213 + cosv * 0.787 - sinv * 0.213, 0.715 - cosv * 0.715 - sinv * 0.715, 0.072 - cosv * 0.072 + sinv * 0.928, // Greens 0.213 - cosv * 0.213 + sinv * 0.143, 0.715 + cosv * 0.285 + sinv * 0.140, 0.072 - cosv * 0.072 - sinv * 0.283, // Blues 0.213 - cosv * 0.213 - sinv * 0.787, 0.715 - cosv * 0.715 + sinv * 0.715, 0.072 + cosv * 0.928 + sinv * 0.072, ]; for (x, y, pixel) in out.enumerate_pixels_mut() { let p = image.get_pixel(x, y); #[allow(deprecated)] let (k1, k2, k3, k4) = p.channels4(); let vec: (f64, f64, f64, f64) = ( NumCast::from(k1).unwrap(), NumCast::from(k2).unwrap(), NumCast::from(k3).unwrap(), NumCast::from(k4).unwrap(), ); let r = vec.0; let g = vec.1; let b = vec.2; let new_r = matrix[0] * r + matrix[1] * g + matrix[2] * b; let new_g = matrix[3] * r + matrix[4] * g + matrix[5] * b; let new_b = matrix[6] * r + matrix[7] * g + matrix[8] * b; let max = 255f64; #[allow(deprecated)] let outpixel = Pixel::from_channels( NumCast::from(clamp(new_r, 0.0, max)).unwrap(), NumCast::from(clamp(new_g, 0.0, max)).unwrap(), NumCast::from(clamp(new_b, 0.0, max)).unwrap(), NumCast::from(clamp(vec.3, 0.0, max)).unwrap(), ); *pixel = outpixel; } out } /// Hue rotate the supplied image in place. /// /// `value` is the degrees to rotate each pixel by. /// 0 and 360 do nothing, the rest rotates by the given degree value. 
/// just like the css webkit filter hue-rotate(180) /// /// *[See also `huerotate`.][huerotate]* pub fn huerotate_in_place(image: &mut I, value: i32) where I: GenericImage, { let (width, height) = image.dimensions(); let angle: f64 = NumCast::from(value).unwrap(); let cosv = (angle * PI / 180.0).cos(); let sinv = (angle * PI / 180.0).sin(); let matrix: [f64; 9] = [ // Reds 0.213 + cosv * 0.787 - sinv * 0.213, 0.715 - cosv * 0.715 - sinv * 0.715, 0.072 - cosv * 0.072 + sinv * 0.928, // Greens 0.213 - cosv * 0.213 + sinv * 0.143, 0.715 + cosv * 0.285 + sinv * 0.140, 0.072 - cosv * 0.072 - sinv * 0.283, // Blues 0.213 - cosv * 0.213 - sinv * 0.787, 0.715 - cosv * 0.715 + sinv * 0.715, 0.072 + cosv * 0.928 + sinv * 0.072, ]; // TODO find a way to use pixels? for y in 0..height { for x in 0..width { let pixel = image.get_pixel(x, y); #[allow(deprecated)] let (k1, k2, k3, k4) = pixel.channels4(); let vec: (f64, f64, f64, f64) = ( NumCast::from(k1).unwrap(), NumCast::from(k2).unwrap(), NumCast::from(k3).unwrap(), NumCast::from(k4).unwrap(), ); let r = vec.0; let g = vec.1; let b = vec.2; let new_r = matrix[0] * r + matrix[1] * g + matrix[2] * b; let new_g = matrix[3] * r + matrix[4] * g + matrix[5] * b; let new_b = matrix[6] * r + matrix[7] * g + matrix[8] * b; let max = 255f64; #[allow(deprecated)] let outpixel = Pixel::from_channels( NumCast::from(clamp(new_r, 0.0, max)).unwrap(), NumCast::from(clamp(new_g, 0.0, max)).unwrap(), NumCast::from(clamp(new_b, 0.0, max)).unwrap(), NumCast::from(clamp(vec.3, 0.0, max)).unwrap(), ); image.put_pixel(x, y, outpixel); } } } /// A color map pub trait ColorMap { /// The color type on which the map operates on type Color; /// Returns the index of the closest match of `color` /// in the color map. fn index_of(&self, color: &Self::Color) -> usize; /// Looks up color by index in the color map. If `idx` is out of range for the color map, or /// `ColorMap` doesn't implement `lookup` `None` is returned. fn lookup(&self, index: usize) -> Option { let _ = index; None } /// Determine if this implementation of `ColorMap` overrides the default `lookup`. fn has_lookup(&self) -> bool { false } /// Maps `color` to the closest color in the color map. fn map_color(&self, color: &mut Self::Color); } /// A bi-level color map /// /// # Examples /// ``` /// use image::imageops::colorops::{index_colors, BiLevel, ColorMap}; /// use image::{ImageBuffer, Luma}; /// /// let (w, h) = (16, 16); /// // Create an image with a smooth horizontal gradient from black (0) to white (255). /// let gray = ImageBuffer::from_fn(w, h, |x, y| -> Luma { [(255 * x / w) as u8].into() }); /// // Mapping the gray image through the `BiLevel` filter should map gray pixels less than half /// // intensity (127) to black (0), and anything greater to white (255). /// let cmap = BiLevel; /// let palletized = index_colors(&gray, &cmap); /// let mapped = ImageBuffer::from_fn(w, h, |x, y| { /// let p = palletized.get_pixel(x, y); /// cmap.lookup(p.0[0] as usize) /// .expect("indexed color out-of-range") /// }); /// // Create an black and white image of expected output. 
/// let bw = ImageBuffer::from_fn(w, h, |x, y| -> Luma { /// if x <= (w / 2) { /// [0].into() /// } else { /// [255].into() /// } /// }); /// assert_eq!(mapped, bw); /// ``` #[derive(Clone, Copy)] pub struct BiLevel; impl ColorMap for BiLevel { type Color = Luma; #[inline(always)] fn index_of(&self, color: &Luma) -> usize { let luma = color.0; if luma[0] > 127 { 1 } else { 0 } } #[inline(always)] fn lookup(&self, idx: usize) -> Option { match idx { 0 => Some([0].into()), 1 => Some([255].into()), _ => None, } } /// Indicate `NeuQuant` implements `lookup`. fn has_lookup(&self) -> bool { true } #[inline(always)] fn map_color(&self, color: &mut Luma) { let new_color = 0xFF * self.index_of(color) as u8; let luma = &mut color.0; luma[0] = new_color; } } #[cfg(feature = "color_quant")] impl ColorMap for color_quant::NeuQuant { type Color = crate::color::Rgba; #[inline(always)] fn index_of(&self, color: &Self::Color) -> usize { self.index_of(color.channels()) } #[inline(always)] fn lookup(&self, idx: usize) -> Option { self.lookup(idx).map(|p| p.into()) } /// Indicate NeuQuant implements `lookup`. fn has_lookup(&self) -> bool { true } #[inline(always)] fn map_color(&self, color: &mut Self::Color) { self.map_pixel(color.channels_mut()) } } /// Floyd-Steinberg error diffusion fn diffuse_err>(pixel: &mut P, error: [i16; 3], factor: i16) { for (e, c) in error.iter().zip(pixel.channels_mut().iter_mut()) { *c = match >::from(*c) + e * factor / 16 { val if val < 0 => 0, val if val > 0xFF => 0xFF, val => val as u8, } } } macro_rules! do_dithering( ($map:expr, $image:expr, $err:expr, $x:expr, $y:expr) => ( { let old_pixel = $image[($x, $y)]; let new_pixel = $image.get_pixel_mut($x, $y); $map.map_color(new_pixel); for ((e, &old), &new) in $err.iter_mut() .zip(old_pixel.channels().iter()) .zip(new_pixel.channels().iter()) { *e = >::from(old) - >::from(new) } } ) ); /// Reduces the colors of the image using the supplied `color_map` while applying /// Floyd-Steinberg dithering to improve the visual conception pub fn dither(image: &mut ImageBuffer>, color_map: &Map) where Map: ColorMap + ?Sized, Pix: Pixel + 'static, { let (width, height) = image.dimensions(); let mut err: [i16; 3] = [0; 3]; for y in 0..height - 1 { let x = 0; do_dithering!(color_map, image, err, x, y); diffuse_err(image.get_pixel_mut(x + 1, y), err, 7); diffuse_err(image.get_pixel_mut(x, y + 1), err, 5); diffuse_err(image.get_pixel_mut(x + 1, y + 1), err, 1); for x in 1..width - 1 { do_dithering!(color_map, image, err, x, y); diffuse_err(image.get_pixel_mut(x + 1, y), err, 7); diffuse_err(image.get_pixel_mut(x - 1, y + 1), err, 3); diffuse_err(image.get_pixel_mut(x, y + 1), err, 5); diffuse_err(image.get_pixel_mut(x + 1, y + 1), err, 1); } let x = width - 1; do_dithering!(color_map, image, err, x, y); diffuse_err(image.get_pixel_mut(x - 1, y + 1), err, 3); diffuse_err(image.get_pixel_mut(x, y + 1), err, 5); } let y = height - 1; let x = 0; do_dithering!(color_map, image, err, x, y); diffuse_err(image.get_pixel_mut(x + 1, y), err, 7); for x in 1..width - 1 { do_dithering!(color_map, image, err, x, y); diffuse_err(image.get_pixel_mut(x + 1, y), err, 7); } let x = width - 1; do_dithering!(color_map, image, err, x, y); } /// Reduces the colors using the supplied `color_map` and returns an image of the indices pub fn index_colors( image: &ImageBuffer>, color_map: &Map, ) -> ImageBuffer, Vec> where Map: ColorMap + ?Sized, Pix: Pixel + 'static, { let mut indices = ImageBuffer::new(image.width(), image.height()); for (pixel, idx) in 
image.pixels().zip(indices.pixels_mut()) { *idx = Luma([color_map.index_of(pixel) as u8]); } indices } #[cfg(test)] mod test { use super::*; use crate::GrayImage; macro_rules! assert_pixels_eq { ($actual:expr, $expected:expr) => {{ let actual_dim = $actual.dimensions(); let expected_dim = $expected.dimensions(); if actual_dim != expected_dim { panic!( "dimensions do not match. \ actual: {:?}, expected: {:?}", actual_dim, expected_dim ) } let diffs = pixel_diffs($actual, $expected); if !diffs.is_empty() { let mut err = "".to_string(); let diff_messages = diffs .iter() .take(5) .map(|d| format!("\nactual: {:?}, expected {:?} ", d.0, d.1)) .collect::>() .join(""); err.push_str(&diff_messages); panic!("pixels do not match. {:?}", err) } }}; } #[test] fn test_dither() { let mut image = ImageBuffer::from_raw(2, 2, vec![127, 127, 127, 127]).unwrap(); let cmap = BiLevel; dither(&mut image, &cmap); assert_eq!(&*image, &[0, 0xFF, 0xFF, 0]); assert_eq!(index_colors(&image, &cmap).into_raw(), vec![0, 1, 1, 0]) } #[test] fn test_grayscale() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); assert_pixels_eq!(&grayscale(&image), &expected); } #[test] fn test_invert() { let mut image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![255u8, 254u8, 253u8, 245u8, 244u8, 243u8]).unwrap(); invert(&mut image); assert_pixels_eq!(&image, &expected); } #[test] fn test_brighten() { let image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 20u8, 21u8, 22u8]).unwrap(); assert_pixels_eq!(&brighten(&image, 10), &expected); } #[test] fn test_brighten_place() { let mut image: GrayImage = ImageBuffer::from_raw(3, 2, vec![0u8, 1u8, 2u8, 10u8, 11u8, 12u8]).unwrap(); let expected: GrayImage = ImageBuffer::from_raw(3, 2, vec![10u8, 11u8, 12u8, 20u8, 21u8, 22u8]).unwrap(); brighten_in_place(&mut image, 10); assert_pixels_eq!(&image, &expected); } #[allow(clippy::type_complexity)] fn pixel_diffs(left: &I, right: &J) -> Vec<((u32, u32, P), (u32, u32, P))> where I: GenericImage, J: GenericImage, P: Pixel + Eq, { left.pixels() .zip(right.pixels()) .filter(|&(p, q)| p != q) .collect::>() } } image-0.25.5/src/imageops/fast_blur.rs000064400000000000000000000112111046102023000157150ustar 00000000000000use num_traits::clamp; use crate::{ImageBuffer, Pixel, Primitive}; /// Approximation of Gaussian blur after /// Kovesi, P.: Fast Almost-Gaussian Filtering The Australian Pattern /// Recognition Society Conference: DICTA 2010. December 2010. Sydney. 
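/// A quick illustrative sketch; the dimensions and sigma are arbitrary:
///
/// ```
/// use image::{imageops::fast_blur, RgbaImage};
///
/// let image = RgbaImage::new(32, 32);
/// let blurred = fast_blur(&image, 2.5);
/// assert_eq!(blurred.dimensions(), (32, 32));
/// ```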
pub fn fast_blur( image_buffer: &ImageBuffer>, sigma: f32, ) -> ImageBuffer> { let (width, height) = image_buffer.dimensions(); if width == 0 || height == 0 { return image_buffer.clone(); } let mut samples = image_buffer.as_flat_samples().samples.to_vec(); let num_passes = 3; let boxes = boxes_for_gauss(sigma, num_passes); for radius in boxes.iter().take(num_passes) { let horizontally_blurred_transposed = horizontal_fast_blur_half::( &samples, width as usize, height as usize, (*radius - 1) / 2, P::CHANNEL_COUNT as usize, ); samples = horizontal_fast_blur_half::( &horizontally_blurred_transposed, height as usize, width as usize, (*radius - 1) / 2, P::CHANNEL_COUNT as usize, ); } ImageBuffer::from_raw(width, height, samples).unwrap() } fn boxes_for_gauss(sigma: f32, n: usize) -> Vec { let w_ideal = f32::sqrt((12.0 * sigma.powi(2) / (n as f32)) + 1.0); let mut w_l = w_ideal.floor(); if w_l % 2.0 == 0.0 { w_l -= 1.0 }; let w_u = w_l + 2.0; let m_ideal = 0.25 * (n as f32) * (w_l + 3.0) - 3.0 * sigma.powi(2) * (w_l + 1.0).recip(); let m = f32::round(m_ideal) as usize; (0..n) .map(|i| if i < m { w_l as usize } else { w_u as usize }) .collect::>() } fn channel_idx(channel: usize, idx: usize, channel_num: usize) -> usize { channel_num * idx + channel } fn horizontal_fast_blur_half( samples: &[P], width: usize, height: usize, r: usize, channel_num: usize, ) -> Vec
<P>
{ let channel_size = width * height; let mut out_samples = vec![P::from(0).unwrap(); channel_size * channel_num]; let mut vals = vec![0.0; channel_num]; let min_value = P::DEFAULT_MIN_VALUE.to_f32().unwrap(); let max_value = P::DEFAULT_MAX_VALUE.to_f32().unwrap(); for row in 0..height { for (channel, value) in vals.iter_mut().enumerate().take(channel_num) { *value = ((-(r as isize))..(r + 1) as isize) .map(|x| { extended_f( samples, width, height, x, row as isize, channel, channel_num, ) .to_f32() .unwrap_or(0.0) }) .sum() } for column in 0..width { for (channel, channel_val) in vals.iter_mut().enumerate() { let val = *channel_val / (2.0 * r as f32 + 1.0); let val = clamp(val, min_value, max_value); let val = P::from(val).unwrap(); let destination_row = column; let destination_column = row; let destination_sample_index = channel_idx( channel, destination_column + destination_row * height, channel_num, ); out_samples[destination_sample_index] = val; *channel_val = *channel_val - extended_f( samples, width, height, column as isize - r as isize, row as isize, channel, channel_num, ) .to_f32() .unwrap_or(0.0) + extended_f( samples, width, height, { column + r + 1 } as isize, row as isize, channel, channel_num, ) .to_f32() .unwrap_or(0.0) } } } out_samples } fn extended_f( samples: &[P], width: usize, height: usize, x: isize, y: isize, channel: usize, channel_num: usize, ) -> P { let x = clamp(x, 0, width as isize - 1) as usize; let y = clamp(y, 0, height as isize - 1) as usize; samples[channel_idx(channel, y * width + x, channel_num)] } image-0.25.5/src/imageops/mod.rs000064400000000000000000000453401046102023000145250ustar 00000000000000//! Image Processing Functions use std::cmp; use crate::image::{GenericImage, GenericImageView, SubImage}; use crate::traits::{Lerp, Pixel, Primitive}; pub use self::sample::FilterType; pub use self::sample::FilterType::{CatmullRom, Gaussian, Lanczos3, Nearest, Triangle}; /// Affine transformations pub use self::affine::{ flip_horizontal, flip_horizontal_in, flip_horizontal_in_place, flip_vertical, flip_vertical_in, flip_vertical_in_place, rotate180, rotate180_in, rotate180_in_place, rotate270, rotate270_in, rotate90, rotate90_in, }; /// Image sampling pub use self::sample::{ blur, filter3x3, interpolate_bilinear, interpolate_nearest, resize, sample_bilinear, sample_nearest, thumbnail, unsharpen, }; /// Color operations pub use self::colorops::{ brighten, contrast, dither, grayscale, grayscale_alpha, grayscale_with_type, grayscale_with_type_alpha, huerotate, index_colors, invert, BiLevel, ColorMap, }; mod affine; // Public only because of Rust bug: // https://github.com/rust-lang/rust/issues/18241 pub mod colorops; mod fast_blur; mod sample; pub use fast_blur::fast_blur; /// Return a mutable view into an image /// The coordinates set the position of the top left corner of the crop. pub fn crop( image: &mut I, x: u32, y: u32, width: u32, height: u32, ) -> SubImage<&mut I> { let (x, y, width, height) = crop_dimms(image, x, y, width, height); SubImage::new(image, x, y, width, height) } /// Return an immutable view into an image /// The coordinates set the position of the top left corner of the crop. 
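/// A quick illustrative sketch with a 4x4 gradient buffer:
///
/// ```
/// use image::{imageops::crop_imm, GrayImage};
///
/// let image = GrayImage::from_raw(4, 4, (0u8..16).collect()).unwrap();
/// let view = crop_imm(&image, 1, 1, 2, 2);
/// assert_eq!(view.to_image().into_raw(), vec![5u8, 6, 9, 10]);
/// ```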
pub fn crop_imm( image: &I, x: u32, y: u32, width: u32, height: u32, ) -> SubImage<&I> { let (x, y, width, height) = crop_dimms(image, x, y, width, height); SubImage::new(image, x, y, width, height) } fn crop_dimms( image: &I, x: u32, y: u32, width: u32, height: u32, ) -> (u32, u32, u32, u32) { let (iwidth, iheight) = image.dimensions(); let x = cmp::min(x, iwidth); let y = cmp::min(y, iheight); let height = cmp::min(height, iheight - y); let width = cmp::min(width, iwidth - x); (x, y, width, height) } /// Calculate the region that can be copied from top to bottom. /// /// Given image size of bottom and top image, and a point at which we want to place the top image /// onto the bottom image, how large can we be? Have to wary of the following issues: /// * Top might be larger than bottom /// * Overflows in the computation /// * Coordinates could be completely out of bounds /// /// The main idea is to make use of inequalities provided by the nature of `saturating_add` and /// `saturating_sub`. These intrinsically validate that all resulting coordinates will be in bounds /// for both images. /// /// We want that all these coordinate accesses are safe: /// 1. `bottom.get_pixel(x + [0..x_range), y + [0..y_range))` /// 2. `top.get_pixel([0..x_range), [0..y_range))` /// /// Proof that the function provides the necessary bounds for width. Note that all unaugmented math /// operations are to be read in standard arithmetic, not integer arithmetic. Since no direct /// integer arithmetic occurs in the implementation, this is unambiguous. /// /// ```text /// Three short notes/lemmata: /// - Iff `(a - b) <= 0` then `a.saturating_sub(b) = 0` /// - Iff `(a - b) >= 0` then `a.saturating_sub(b) = a - b` /// - If `a <= c` then `a.saturating_sub(b) <= c.saturating_sub(b)` /// /// 1.1 We show that if `bottom_width <= x`, then `x_range = 0` therefore `x + [0..x_range)` is empty. /// /// x_range /// = (top_width.saturating_add(x).min(bottom_width)).saturating_sub(x) /// <= bottom_width.saturating_sub(x) /// /// bottom_width <= x /// <==> bottom_width - x <= 0 /// <==> bottom_width.saturating_sub(x) = 0 /// ==> x_range <= 0 /// ==> x_range = 0 /// /// 1.2 If `x < bottom_width` then `x + x_range < bottom_width` /// /// x + x_range /// <= x + bottom_width.saturating_sub(x) /// = x + (bottom_width - x) /// = bottom_width /// /// 2. We show that `x_range <= top_width` /// /// x_range /// = (top_width.saturating_add(x).min(bottom_width)).saturating_sub(x) /// <= top_width.saturating_add(x).saturating_sub(x) /// <= (top_wdith + x).saturating_sub(x) /// = top_width (due to `top_width >= 0` and `x >= 0`) /// ``` /// /// Proof is the same for height. #[must_use] pub fn overlay_bounds( (bottom_width, bottom_height): (u32, u32), (top_width, top_height): (u32, u32), x: u32, y: u32, ) -> (u32, u32) { let x_range = top_width .saturating_add(x) // Calculate max coordinate .min(bottom_width) // Restrict to lower width .saturating_sub(x); // Determinate length from start `x` let y_range = top_height .saturating_add(y) .min(bottom_height) .saturating_sub(y); (x_range, y_range) } /// Calculate the region that can be copied from top to bottom. /// /// Given image size of bottom and top image, and a point at which we want to place the top image /// onto the bottom image, how large can we be? 
Have to wary of the following issues: /// * Top might be larger than bottom /// * Overflows in the computation /// * Coordinates could be completely out of bounds /// /// The returned value is of the form: /// /// `(origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, x_range, y_range)` /// /// The main idea is to do computations on i64's and then clamp to image dimensions. /// In particular, we want to ensure that all these coordinate accesses are safe: /// 1. `bottom.get_pixel(origin_bottom_x + [0..x_range), origin_bottom_y + [0..y_range))` /// 2. `top.get_pixel(origin_top_y + [0..x_range), origin_top_y + [0..y_range))` /// fn overlay_bounds_ext( (bottom_width, bottom_height): (u32, u32), (top_width, top_height): (u32, u32), x: i64, y: i64, ) -> (u32, u32, u32, u32, u32, u32) { // Return a predictable value if the two images don't overlap at all. if x > i64::from(bottom_width) || y > i64::from(bottom_height) || x.saturating_add(i64::from(top_width)) <= 0 || y.saturating_add(i64::from(top_height)) <= 0 { return (0, 0, 0, 0, 0, 0); } // Find the maximum x and y coordinates in terms of the bottom image. let max_x = x.saturating_add(i64::from(top_width)); let max_y = y.saturating_add(i64::from(top_height)); // Clip the origin and maximum coordinates to the bounds of the bottom image. // Casting to a u32 is safe because both 0 and `bottom_{width,height}` fit // into 32-bits. let max_inbounds_x = max_x.clamp(0, i64::from(bottom_width)) as u32; let max_inbounds_y = max_y.clamp(0, i64::from(bottom_height)) as u32; let origin_bottom_x = x.clamp(0, i64::from(bottom_width)) as u32; let origin_bottom_y = y.clamp(0, i64::from(bottom_height)) as u32; // The range is the difference between the maximum inbounds coordinates and // the clipped origin. Unchecked subtraction is safe here because both are // always positive and `max_inbounds_{x,y}` >= `origin_{x,y}` due to // `top_{width,height}` being >= 0. let x_range = max_inbounds_x - origin_bottom_x; let y_range = max_inbounds_y - origin_bottom_y; // If x (or y) is negative, then the origin of the top image is shifted by -x (or -y). 
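    // Worked example (mirrors the unit tests below): for a 10x10 bottom image, a 10x10 top
    // image and x = -1, y = 0, the top image's origin is shifted to (1, 0), the bottom
    // origin stays at (0, 0), and the copyable region is 9x10.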
let origin_top_x = x.saturating_mul(-1).clamp(0, i64::from(top_width)) as u32; let origin_top_y = y.saturating_mul(-1).clamp(0, i64::from(top_height)) as u32; ( origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, x_range, y_range, ) } /// Overlay an image at a given coordinate (x, y) pub fn overlay(bottom: &mut I, top: &J, x: i64, y: i64) where I: GenericImage, J: GenericImageView, { let bottom_dims = bottom.dimensions(); let top_dims = top.dimensions(); // Crop our top image if we're going out of bounds let (origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, range_width, range_height) = overlay_bounds_ext(bottom_dims, top_dims, x, y); for y in 0..range_height { for x in 0..range_width { let p = top.get_pixel(origin_top_x + x, origin_top_y + y); let mut bottom_pixel = bottom.get_pixel(origin_bottom_x + x, origin_bottom_y + y); bottom_pixel.blend(&p); bottom.put_pixel(origin_bottom_x + x, origin_bottom_y + y, bottom_pixel); } } } /// Tile an image by repeating it multiple times /// /// # Examples /// ```no_run /// use image::{RgbaImage}; /// /// let mut img = RgbaImage::new(1920, 1080); /// let tile = image::open("tile.png").unwrap(); /// /// image::imageops::tile(&mut img, &tile); /// img.save("tiled_wallpaper.png").unwrap(); /// ``` pub fn tile(bottom: &mut I, top: &J) where I: GenericImage, J: GenericImageView, { for x in (0..bottom.width()).step_by(top.width() as usize) { for y in (0..bottom.height()).step_by(top.height() as usize) { overlay(bottom, top, i64::from(x), i64::from(y)); } } } /// Fill the image with a linear vertical gradient /// /// This function assumes a linear color space. /// /// # Examples /// ```no_run /// use image::{Rgba, RgbaImage, Pixel}; /// /// let mut img = RgbaImage::new(100, 100); /// let start = Rgba::from_slice(&[0, 128, 0, 0]); /// let end = Rgba::from_slice(&[255, 255, 255, 255]); /// /// image::imageops::vertical_gradient(&mut img, start, end); /// img.save("vertical_gradient.png").unwrap(); pub fn vertical_gradient(img: &mut I, start: &P, stop: &P) where I: GenericImage, P: Pixel + 'static, S: Primitive + Lerp + 'static, { for y in 0..img.height() { let pixel = start.map2(stop, |a, b| { let y = ::from(y).unwrap(); let height = ::from(img.height() - 1).unwrap(); S::lerp(a, b, y / height) }); for x in 0..img.width() { img.put_pixel(x, y, pixel); } } } /// Fill the image with a linear horizontal gradient /// /// This function assumes a linear color space. 
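///
/// Concretely (a summary of the implementation below, added for clarity): the pixel in
/// column `x` is `lerp(start, stop, x / (width - 1))`, computed independently for each
/// channel.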
/// /// # Examples /// ```no_run /// use image::{Rgba, RgbaImage, Pixel}; /// /// let mut img = RgbaImage::new(100, 100); /// let start = Rgba::from_slice(&[0, 128, 0, 0]); /// let end = Rgba::from_slice(&[255, 255, 255, 255]); /// /// image::imageops::horizontal_gradient(&mut img, start, end); /// img.save("horizontal_gradient.png").unwrap(); pub fn horizontal_gradient(img: &mut I, start: &P, stop: &P) where I: GenericImage, P: Pixel + 'static, S: Primitive + Lerp + 'static, { for x in 0..img.width() { let pixel = start.map2(stop, |a, b| { let x = ::from(x).unwrap(); let width = ::from(img.width() - 1).unwrap(); S::lerp(a, b, x / width) }); for y in 0..img.height() { img.put_pixel(x, y, pixel); } } } /// Replace the contents of an image at a given coordinate (x, y) pub fn replace(bottom: &mut I, top: &J, x: i64, y: i64) where I: GenericImage, J: GenericImageView, { let bottom_dims = bottom.dimensions(); let top_dims = top.dimensions(); // Crop our top image if we're going out of bounds let (origin_bottom_x, origin_bottom_y, origin_top_x, origin_top_y, range_width, range_height) = overlay_bounds_ext(bottom_dims, top_dims, x, y); for y in 0..range_height { for x in 0..range_width { let p = top.get_pixel(origin_top_x + x, origin_top_y + y); bottom.put_pixel(origin_bottom_x + x, origin_bottom_y + y, p); } } } #[cfg(test)] mod tests { use super::*; use crate::color::Rgb; use crate::GrayAlphaImage; use crate::GrayImage; use crate::ImageBuffer; use crate::RgbImage; use crate::RgbaImage; #[test] fn test_overlay_bounds_ext() { assert_eq!( overlay_bounds_ext((10, 10), (10, 10), 0, 0), (0, 0, 0, 0, 10, 10) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), 1, 0), (1, 0, 0, 0, 9, 10) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), 0, 11), (0, 0, 0, 0, 0, 0) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), -1, 0), (0, 0, 1, 0, 9, 10) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), -10, 0), (0, 0, 0, 0, 0, 0) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), 1i64 << 50, 0), (0, 0, 0, 0, 0, 0) ); assert_eq!( overlay_bounds_ext((10, 10), (10, 10), -(1i64 << 50), 0), (0, 0, 0, 0, 0, 0) ); assert_eq!( overlay_bounds_ext((10, 10), (u32::MAX, 10), 10 - i64::from(u32::MAX), 0), (0, 0, u32::MAX - 10, 0, 10, 10) ); } #[test] /// Test that images written into other images works fn test_image_in_image() { let mut target = ImageBuffer::new(32, 32); let source = ImageBuffer::from_pixel(16, 16, Rgb([255u8, 0, 0])); overlay(&mut target, &source, 0, 0); assert!(*target.get_pixel(0, 0) == Rgb([255u8, 0, 0])); assert!(*target.get_pixel(15, 0) == Rgb([255u8, 0, 0])); assert!(*target.get_pixel(16, 0) == Rgb([0u8, 0, 0])); assert!(*target.get_pixel(0, 15) == Rgb([255u8, 0, 0])); assert!(*target.get_pixel(0, 16) == Rgb([0u8, 0, 0])); } #[test] /// Test that images written outside of a frame doesn't blow up fn test_image_in_image_outside_of_bounds() { let mut target = ImageBuffer::new(32, 32); let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0])); overlay(&mut target, &source, 1, 1); assert!(*target.get_pixel(0, 0) == Rgb([0, 0, 0])); assert!(*target.get_pixel(1, 1) == Rgb([255u8, 0, 0])); assert!(*target.get_pixel(31, 31) == Rgb([255u8, 0, 0])); } #[test] /// Test that images written to coordinates out of the frame doesn't blow up /// (issue came up in #848) fn test_image_outside_image_no_wrap_around() { let mut target = ImageBuffer::new(32, 32); let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0])); overlay(&mut target, &source, 33, 33); assert!(*target.get_pixel(0, 0) == 
Rgb([0, 0, 0])); assert!(*target.get_pixel(1, 1) == Rgb([0, 0, 0])); assert!(*target.get_pixel(31, 31) == Rgb([0, 0, 0])); } #[test] /// Test that images written to coordinates with overflow works fn test_image_coordinate_overflow() { let mut target = ImageBuffer::new(16, 16); let source = ImageBuffer::from_pixel(32, 32, Rgb([255u8, 0, 0])); // Overflows to 'sane' coordinates but top is larger than bot. overlay( &mut target, &source, i64::from(u32::MAX - 31), i64::from(u32::MAX - 31), ); assert!(*target.get_pixel(0, 0) == Rgb([0, 0, 0])); assert!(*target.get_pixel(1, 1) == Rgb([0, 0, 0])); assert!(*target.get_pixel(15, 15) == Rgb([0, 0, 0])); } use super::{horizontal_gradient, vertical_gradient}; #[test] /// Test that horizontal gradients are correctly generated fn test_image_horizontal_gradient_limits() { let mut img = ImageBuffer::new(100, 1); let start = Rgb([0u8, 128, 0]); let end = Rgb([255u8, 255, 255]); horizontal_gradient(&mut img, &start, &end); assert_eq!(img.get_pixel(0, 0), &start); assert_eq!(img.get_pixel(img.width() - 1, 0), &end); } #[test] /// Test that vertical gradients are correctly generated fn test_image_vertical_gradient_limits() { let mut img = ImageBuffer::new(1, 100); let start = Rgb([0u8, 128, 0]); let end = Rgb([255u8, 255, 255]); vertical_gradient(&mut img, &start, &end); assert_eq!(img.get_pixel(0, 0), &start); assert_eq!(img.get_pixel(0, img.height() - 1), &end); } #[test] /// Test blur doesn't panic when passed 0.0 fn test_blur_zero() { let image = RgbaImage::new(50, 50); let _ = blur(&image, 0.0); } #[test] /// Test fast blur doesn't panic when passed 0.0 fn test_fast_blur_zero() { let image = RgbaImage::new(50, 50); let _ = fast_blur(&image, 0.0); } #[test] /// Test fast blur doesn't panic when passed negative numbers fn test_fast_blur_negative() { let image = RgbaImage::new(50, 50); let _ = fast_blur(&image, -1.0); } #[test] /// Test fast blur doesn't panic when sigma produces boxes larger than the image fn test_fast_large_sigma() { let image = RgbaImage::new(1, 1); let _ = fast_blur(&image, 50.0); } #[test] /// Test blur doesn't panic when passed an empty image (any direction) fn test_fast_blur_empty() { let image = RgbaImage::new(0, 0); let _ = fast_blur(&image, 1.0); let image = RgbaImage::new(20, 0); let _ = fast_blur(&image, 1.0); let image = RgbaImage::new(0, 20); let _ = fast_blur(&image, 1.0); } #[test] /// Test fast blur works with 3 channels fn test_fast_blur_3_channels() { let image = RgbImage::new(50, 50); let _ = fast_blur(&image, 1.0); } #[test] /// Test fast blur works with 2 channels fn test_fast_blur_2_channels() { let image = GrayAlphaImage::new(50, 50); let _ = fast_blur(&image, 1.0); } #[test] /// Test fast blur works with 1 channel fn test_fast_blur_1_channels() { let image = GrayImage::new(50, 50); let _ = fast_blur(&image, 1.0); } #[test] #[cfg(feature = "tiff")] fn fast_blur_approximates_gaussian_blur_well() { let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/tiff/testsuite/rgb-3c-16b.tiff" ); let image = crate::open(path).unwrap(); let image_blurred_gauss = image.blur(50.0).to_rgb8(); let image_blurred_gauss_samples = image_blurred_gauss.as_flat_samples(); let image_blurred_gauss_bytes = image_blurred_gauss_samples.as_slice(); let image_blurred_fast = image.fast_blur(50.0).to_rgb8(); let image_blurred_fast_samples = image_blurred_fast.as_flat_samples(); let image_blurred_fast_bytes = image_blurred_fast_samples.as_slice(); let error = image_blurred_gauss_bytes .iter() .zip(image_blurred_fast_bytes.iter()) .map(|(a, b)| 
((*a as f32 - *b as f32) / (*a as f32))) .sum::() / (image_blurred_gauss_bytes.len() as f32); assert!(error < 0.05); } } image-0.25.5/src/imageops/sample.rs000064400000000000000000001246021046102023000152260ustar 00000000000000//! Functions and filters for the sampling of pixels. // See http://cs.brown.edu/courses/cs123/lectures/08_Image_Processing_IV.pdf // for some of the theory behind image scaling and convolution use std::f32; use num_traits::{NumCast, ToPrimitive, Zero}; use crate::image::{GenericImage, GenericImageView}; use crate::traits::{Enlargeable, Pixel, Primitive}; use crate::utils::clamp; use crate::{ImageBuffer, Rgba32FImage}; /// Available Sampling Filters. /// /// ## Examples /// /// To test the different sampling filters on a real example, you can find two /// examples called /// [`scaledown`](https://github.com/image-rs/image/tree/master/examples/scaledown) /// and /// [`scaleup`](https://github.com/image-rs/image/tree/master/examples/scaleup) /// in the `examples` directory of the crate source code. /// /// Here is a 3.58 MiB /// [test image](https://github.com/image-rs/image/blob/master/examples/scaledown/test.jpg) /// that has been scaled down to 300x225 px: /// /// ///

/// (Example images omitted here; the five panels show, in order: Nearest Neighbor,
/// Linear: Triangle, Cubic: Catmull-Rom, Gaussian, and Lanczos with window 3.)
///
/// ## Speed
///
/// Time required to create each of the examples above, tested on an Intel
/// i7-4770 CPU with Rust 1.37 in release mode:
///
/// | Filter     | Time    |
/// | ---------- | ------- |
/// | Nearest    | 31 ms   |
/// | Triangle   | 414 ms  |
/// | CatmullRom | 817 ms  |
/// | Gaussian   | 1180 ms |
/// | Lanczos3   | 1170 ms |
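///
/// A minimal usage sketch (added for illustration; `"test.jpg"` is a placeholder path):
///
/// ```no_run
/// use image::imageops::{resize, FilterType};
///
/// let img = image::open("test.jpg").unwrap().to_rgba8();
/// // `CatmullRom` sits in the middle of the timing table above and is a common
/// // speed/quality trade-off; any other variant can be passed instead.
/// let scaled = resize(&img, 300, 225, FilterType::CatmullRom);
/// ```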
#[derive(Clone, Copy, Debug, PartialEq, Hash)] pub enum FilterType { /// Nearest Neighbor Nearest, /// Linear Filter Triangle, /// Cubic Filter CatmullRom, /// Gaussian Filter Gaussian, /// Lanczos with window 3 Lanczos3, } /// A Representation of a separable filter. pub(crate) struct Filter<'a> { /// The filter's filter function. pub(crate) kernel: Box f32 + 'a>, /// The window on which this filter operates. pub(crate) support: f32, } struct FloatNearest(f32); // to_i64, to_u64, and to_f64 implicitly affect all other lower conversions. // Note that to_f64 by default calls to_i64 and thus needs to be overridden. impl ToPrimitive for FloatNearest { // to_{i,u}64 is required, to_{i,u}{8,16} are useful. // If a usecase for full 32 bits is found its trivial to add fn to_i8(&self) -> Option { self.0.round().to_i8() } fn to_i16(&self) -> Option { self.0.round().to_i16() } fn to_i64(&self) -> Option { self.0.round().to_i64() } fn to_u8(&self) -> Option { self.0.round().to_u8() } fn to_u16(&self) -> Option { self.0.round().to_u16() } fn to_u64(&self) -> Option { self.0.round().to_u64() } fn to_f64(&self) -> Option { self.0.to_f64() } } // sinc function: the ideal sampling filter. fn sinc(t: f32) -> f32 { let a = t * f32::consts::PI; if t == 0.0 { 1.0 } else { a.sin() / a } } // lanczos kernel function. A windowed sinc function. fn lanczos(x: f32, t: f32) -> f32 { if x.abs() < t { sinc(x) * sinc(x / t) } else { 0.0 } } // Calculate a splice based on the b and c parameters. // from authors Mitchell and Netravali. fn bc_cubic_spline(x: f32, b: f32, c: f32) -> f32 { let a = x.abs(); let k = if a < 1.0 { (12.0 - 9.0 * b - 6.0 * c) * a.powi(3) + (-18.0 + 12.0 * b + 6.0 * c) * a.powi(2) + (6.0 - 2.0 * b) } else if a < 2.0 { (-b - 6.0 * c) * a.powi(3) + (6.0 * b + 30.0 * c) * a.powi(2) + (-12.0 * b - 48.0 * c) * a + (8.0 * b + 24.0 * c) } else { 0.0 }; k / 6.0 } /// The Gaussian Function. /// ```r``` is the standard deviation. pub(crate) fn gaussian(x: f32, r: f32) -> f32 { ((2.0 * f32::consts::PI).sqrt() * r).recip() * (-x.powi(2) / (2.0 * r.powi(2))).exp() } /// Calculate the lanczos kernel with a window of 3 pub(crate) fn lanczos3_kernel(x: f32) -> f32 { lanczos(x, 3.0) } /// Calculate the gaussian function with a /// standard deviation of 0.5 pub(crate) fn gaussian_kernel(x: f32) -> f32 { gaussian(x, 0.5) } /// Calculate the Catmull-Rom cubic spline. /// Also known as a form of `BiCubic` sampling in two dimensions. pub(crate) fn catmullrom_kernel(x: f32) -> f32 { bc_cubic_spline(x, 0.0, 0.5) } /// Calculate the triangle function. /// Also known as `BiLinear` sampling in two dimensions. pub(crate) fn triangle_kernel(x: f32) -> f32 { if x.abs() < 1.0 { 1.0 - x.abs() } else { 0.0 } } /// Calculate the box kernel. /// Only pixels inside the box should be considered, and those /// contribute equally. So this method simply returns 1. pub(crate) fn box_kernel(_x: f32) -> f32 { 1.0 } // Sample the rows of the supplied image using the provided filter. // The height of the image remains unchanged. // ```new_width``` is the desired width of the new image // ```filter``` is the filter to use for sampling. // ```image``` is not necessarily Rgba and the order of channels is passed through. // // Note: if an empty image is passed in, panics unless the image is truly empty. 
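// Worked example of the weighting (added for illustration): when downscaling by 2x with
// the Triangle filter, `sratio` is 2 and `src_support` is 1.0 * 2 = 2, so each output
// pixel draws on roughly four neighbouring input pixels; their kernel values
// `kernel((i - inputx) / sratio)` are then divided by their sum so the weights add up to 1.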
fn horizontal_sample( image: &Rgba32FImage, new_width: u32, filter: &mut Filter, ) -> ImageBuffer> where P: Pixel + 'static, S: Primitive + 'static, { let (width, height) = image.dimensions(); // This is protection against a memory usage similar to #2340. See `vertical_sample`. assert!( // Checks the implication: (width == 0) -> (height == 0) width != 0 || height == 0, "Unexpected prior allocation size. This case should have been handled by the caller" ); let mut out = ImageBuffer::new(new_width, height); let mut ws = Vec::new(); let max: f32 = NumCast::from(S::DEFAULT_MAX_VALUE).unwrap(); let min: f32 = NumCast::from(S::DEFAULT_MIN_VALUE).unwrap(); let ratio = width as f32 / new_width as f32; let sratio = if ratio < 1.0 { 1.0 } else { ratio }; let src_support = filter.support * sratio; for outx in 0..new_width { // Find the point in the input image corresponding to the centre // of the current pixel in the output image. let inputx = (outx as f32 + 0.5) * ratio; // Left and right are slice bounds for the input pixels relevant // to the output pixel we are calculating. Pixel x is relevant // if and only if (x >= left) && (x < right). // Invariant: 0 <= left < right <= width let left = (inputx - src_support).floor() as i64; let left = clamp(left, 0, >::from(width) - 1) as u32; let right = (inputx + src_support).ceil() as i64; let right = clamp( right, >::from(left) + 1, >::from(width), ) as u32; // Go back to left boundary of pixel, to properly compare with i // below, as the kernel treats the centre of a pixel as 0. let inputx = inputx - 0.5; ws.clear(); let mut sum = 0.0; for i in left..right { let w = (filter.kernel)((i as f32 - inputx) / sratio); ws.push(w); sum += w; } ws.iter_mut().for_each(|w| *w /= sum); for y in 0..height { let mut t = (0.0, 0.0, 0.0, 0.0); for (i, w) in ws.iter().enumerate() { let p = image.get_pixel(left + i as u32, y); #[allow(deprecated)] let vec = p.channels4(); t.0 += vec.0 * w; t.1 += vec.1 * w; t.2 += vec.2 * w; t.3 += vec.3 * w; } #[allow(deprecated)] let t = Pixel::from_channels( NumCast::from(FloatNearest(clamp(t.0, min, max))).unwrap(), NumCast::from(FloatNearest(clamp(t.1, min, max))).unwrap(), NumCast::from(FloatNearest(clamp(t.2, min, max))).unwrap(), NumCast::from(FloatNearest(clamp(t.3, min, max))).unwrap(), ); out.put_pixel(outx, y, t); } } out } /// Linearly sample from an image using coordinates in [0, 1]. pub fn sample_bilinear( img: &impl GenericImageView, u: f32, v: f32, ) -> Option
<P>
{ if ![u, v].iter().all(|c| (0.0..=1.0).contains(c)) { return None; } let (w, h) = img.dimensions(); if w == 0 || h == 0 { return None; } let ui = w as f32 * u - 0.5; let vi = h as f32 * v - 0.5; interpolate_bilinear( img, ui.max(0.).min((w - 1) as f32), vi.max(0.).min((h - 1) as f32), ) } /// Sample from an image using coordinates in [0, 1], taking the nearest coordinate. pub fn sample_nearest( img: &impl GenericImageView, u: f32, v: f32, ) -> Option
<P>
{ if ![u, v].iter().all(|c| (0.0..=1.0).contains(c)) { return None; } let (w, h) = img.dimensions(); let ui = w as f32 * u - 0.5; let ui = ui.max(0.).min((w.saturating_sub(1)) as f32); let vi = h as f32 * v - 0.5; let vi = vi.max(0.).min((h.saturating_sub(1)) as f32); interpolate_nearest(img, ui, vi) } /// Sample from an image using coordinates in [0, w-1] and [0, h-1], taking the /// nearest pixel. /// /// Coordinates outside the image bounds will return `None`, however the /// behavior for points within half a pixel of the image bounds may change in /// the future. pub fn interpolate_nearest( img: &impl GenericImageView, x: f32, y: f32, ) -> Option
<P>
{ let (w, h) = img.dimensions(); if w == 0 || h == 0 { return None; } if !(0.0..=((w - 1) as f32)).contains(&x) { return None; } if !(0.0..=((h - 1) as f32)).contains(&y) { return None; } Some(img.get_pixel(x.round() as u32, y.round() as u32)) } /// Linearly sample from an image using coordinates in [0, w-1] and [0, h-1]. pub fn interpolate_bilinear( img: &impl GenericImageView, x: f32, y: f32, ) -> Option
<P>
{ // assumption needed for correctness of pixel creation assert!(P::CHANNEL_COUNT <= 4); let (w, h) = img.dimensions(); if w == 0 || h == 0 { return None; } if !(0.0..=((w - 1) as f32)).contains(&x) { return None; } if !(0.0..=((h - 1) as f32)).contains(&y) { return None; } // keep these as integers, for fewer FLOPs let uf = x.floor() as u32; let vf = y.floor() as u32; let uc = (uf + 1).min(w - 1); let vc = (vf + 1).min(h - 1); // clamp coords to the range of the image let mut sxx = [[0.; 4]; 4]; // do not use Array::map, as it can be slow with high stack usage, // for [[f32; 4]; 4]. // convert samples to f32 // currently rgba is the largest one, // so just store as many items as necessary, // because there's not a simple way to be generic over all of them. let mut compute = |u: u32, v: u32, i| { let s = img.get_pixel(u, v); for (j, c) in s.channels().iter().enumerate() { sxx[j][i] = c.to_f32().unwrap(); } s }; // hacky reuse since cannot construct a generic Pixel let mut out: P = compute(uf, vf, 0); compute(uf, vc, 1); compute(uc, vf, 2); compute(uc, vc, 3); // weights, the later two are independent from the first 2 for better vectorization. let ufw = x - uf as f32; let vfw = y - vf as f32; let ucw = (uf + 1) as f32 - x; let vcw = (vf + 1) as f32 - y; // https://en.wikipedia.org/wiki/Bilinear_interpolation#Weighted_mean // the distance between pixels is 1 so there is no denominator let wff = ucw * vcw; let wfc = ucw * vfw; let wcf = ufw * vcw; let wcc = ufw * vfw; // was originally assert, but is actually not a cheap computation debug_assert!(f32::abs((wff + wfc + wcf + wcc) - 1.) < 1e-3); // hack to see if primitive is an integer or a float let is_float = P::Subpixel::DEFAULT_MAX_VALUE.to_f32().unwrap() == 1.0; for (i, c) in out.channels_mut().iter_mut().enumerate() { let v = wff * sxx[i][0] + wfc * sxx[i][1] + wcf * sxx[i][2] + wcc * sxx[i][3]; // this rounding may introduce quantization errors, // Specifically what is meant is that many samples may deviate // from the mean value of the originals, but it's not possible to fix that. *c = ::from(if is_float { v } else { v.round() }).unwrap_or({ if v < 0.0 { P::Subpixel::DEFAULT_MIN_VALUE } else { P::Subpixel::DEFAULT_MAX_VALUE } }); } Some(out) } // Sample the columns of the supplied image using the provided filter. // The width of the image remains unchanged. // ```new_height``` is the desired height of the new image // ```filter``` is the filter to use for sampling. // The return value is not necessarily Rgba, the underlying order of channels in ```image``` is // preserved. // // Note: if an empty image is passed in, panics unless the image is truly empty. fn vertical_sample(image: &I, new_height: u32, filter: &mut Filter) -> Rgba32FImage where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { let (width, height) = image.dimensions(); // This is protection against a regression in memory usage such as #2340. Since the strategy to // deal with it depends on the caller it is a precondition of this function. assert!( // Checks the implication: (height == 0) -> (width == 0) height != 0 || width == 0, "Unexpected prior allocation size. This case should have been handled by the caller" ); let mut out = ImageBuffer::new(width, new_height); let mut ws = Vec::new(); let ratio = height as f32 / new_height as f32; let sratio = if ratio < 1.0 { 1.0 } else { ratio }; let src_support = filter.support * sratio; for outy in 0..new_height { // For an explanation of this algorithm, see the comments // in horizontal_sample. 
let inputy = (outy as f32 + 0.5) * ratio; let left = (inputy - src_support).floor() as i64; let left = clamp(left, 0, >::from(height) - 1) as u32; let right = (inputy + src_support).ceil() as i64; let right = clamp( right, >::from(left) + 1, >::from(height), ) as u32; let inputy = inputy - 0.5; ws.clear(); let mut sum = 0.0; for i in left..right { let w = (filter.kernel)((i as f32 - inputy) / sratio); ws.push(w); sum += w; } ws.iter_mut().for_each(|w| *w /= sum); for x in 0..width { let mut t = (0.0, 0.0, 0.0, 0.0); for (i, w) in ws.iter().enumerate() { let p = image.get_pixel(x, left + i as u32); #[allow(deprecated)] let (k1, k2, k3, k4) = p.channels4(); let vec: (f32, f32, f32, f32) = ( NumCast::from(k1).unwrap(), NumCast::from(k2).unwrap(), NumCast::from(k3).unwrap(), NumCast::from(k4).unwrap(), ); t.0 += vec.0 * w; t.1 += vec.1 * w; t.2 += vec.2 * w; t.3 += vec.3 * w; } #[allow(deprecated)] // This is not necessarily Rgba. let t = Pixel::from_channels(t.0, t.1, t.2, t.3); out.put_pixel(x, outy, t); } } out } /// Local struct for keeping track of pixel sums for fast thumbnail averaging struct ThumbnailSum(S::Larger, S::Larger, S::Larger, S::Larger); impl ThumbnailSum { fn zeroed() -> Self { ThumbnailSum( S::Larger::zero(), S::Larger::zero(), S::Larger::zero(), S::Larger::zero(), ) } fn sample_val(val: S) -> S::Larger { ::from(val).unwrap() } fn add_pixel>(&mut self, pixel: P) { #[allow(deprecated)] let pixel = pixel.channels4(); self.0 += Self::sample_val(pixel.0); self.1 += Self::sample_val(pixel.1); self.2 += Self::sample_val(pixel.2); self.3 += Self::sample_val(pixel.3); } } /// Resize the supplied image to the specific dimensions. /// /// For downscaling, this method uses a fast integer algorithm where each source pixel contributes /// to exactly one target pixel. May give aliasing artifacts if new size is close to old size. /// /// In case the current width is smaller than the new width or similar for the height, another /// strategy is used instead. For each pixel in the output, a rectangular region of the input is /// determined, just as previously. But when no input pixel is part of this region, the nearest /// pixels are interpolated instead. /// /// For speed reasons, all interpolation is performed linearly over the colour values. It will not /// take the pixel colour spaces into account. pub fn thumbnail(image: &I, new_width: u32, new_height: u32) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + Enlargeable + 'static, { let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(new_width, new_height); if height == 0 || width == 0 { return out; } let x_ratio = width as f32 / new_width as f32; let y_ratio = height as f32 / new_height as f32; for outy in 0..new_height { let bottomf = outy as f32 * y_ratio; let topf = bottomf + y_ratio; let bottom = clamp(bottomf.ceil() as u32, 0, height - 1); let top = clamp(topf.ceil() as u32, bottom, height); for outx in 0..new_width { let leftf = outx as f32 * x_ratio; let rightf = leftf + x_ratio; let left = clamp(leftf.ceil() as u32, 0, width - 1); let right = clamp(rightf.ceil() as u32, left, width); let avg = if bottom != top && left != right { thumbnail_sample_block(image, left, right, bottom, top) } else if bottom != top { // && left == right // In the first column we have left == 0 and right > ceil(y_scale) > 0 so this // assertion can never trigger. 
debug_assert!( left > 0 && right > 0, "First output column must have corresponding pixels" ); let fraction_horizontal = (leftf.fract() + rightf.fract()) / 2.; thumbnail_sample_fraction_horizontal( image, right - 1, fraction_horizontal, bottom, top, ) } else if left != right { // && bottom == top // In the first line we have bottom == 0 and top > ceil(x_scale) > 0 so this // assertion can never trigger. debug_assert!( bottom > 0 && top > 0, "First output row must have corresponding pixels" ); let fraction_vertical = (topf.fract() + bottomf.fract()) / 2.; thumbnail_sample_fraction_vertical(image, left, right, top - 1, fraction_vertical) } else { // bottom == top && left == right let fraction_horizontal = (topf.fract() + bottomf.fract()) / 2.; let fraction_vertical = (leftf.fract() + rightf.fract()) / 2.; thumbnail_sample_fraction_both( image, right - 1, fraction_horizontal, top - 1, fraction_vertical, ) }; #[allow(deprecated)] let pixel = Pixel::from_channels(avg.0, avg.1, avg.2, avg.3); out.put_pixel(outx, outy, pixel); } } out } /// Get a pixel for a thumbnail where the input window encloses at least a full pixel. fn thumbnail_sample_block( image: &I, left: u32, right: u32, bottom: u32, top: u32, ) -> (S, S, S, S) where I: GenericImageView, P: Pixel, S: Primitive + Enlargeable, { let mut sum = ThumbnailSum::zeroed(); for y in bottom..top { for x in left..right { let k = image.get_pixel(x, y); sum.add_pixel(k); } } let n = ::from((right - left) * (top - bottom)).unwrap(); let round = ::from(n / NumCast::from(2).unwrap()).unwrap(); ( S::clamp_from((sum.0 + round) / n), S::clamp_from((sum.1 + round) / n), S::clamp_from((sum.2 + round) / n), S::clamp_from((sum.3 + round) / n), ) } /// Get a thumbnail pixel where the input window encloses at least a vertical pixel. fn thumbnail_sample_fraction_horizontal( image: &I, left: u32, fraction_horizontal: f32, bottom: u32, top: u32, ) -> (S, S, S, S) where I: GenericImageView, P: Pixel, S: Primitive + Enlargeable, { let fract = fraction_horizontal; let mut sum_left = ThumbnailSum::zeroed(); let mut sum_right = ThumbnailSum::zeroed(); for x in bottom..top { let k_left = image.get_pixel(left, x); sum_left.add_pixel(k_left); let k_right = image.get_pixel(left + 1, x); sum_right.add_pixel(k_right); } // Now we approximate: left/n*(1-fract) + right/n*fract let fact_right = fract / ((top - bottom) as f32); let fact_left = (1. - fract) / ((top - bottom) as f32); let mix_left_and_right = |leftv: S::Larger, rightv: S::Larger| { ::from( fact_left * leftv.to_f32().unwrap() + fact_right * rightv.to_f32().unwrap(), ) .expect("Average sample value should fit into sample type") }; ( mix_left_and_right(sum_left.0, sum_right.0), mix_left_and_right(sum_left.1, sum_right.1), mix_left_and_right(sum_left.2, sum_right.2), mix_left_and_right(sum_left.3, sum_right.3), ) } /// Get a thumbnail pixel where the input window encloses at least a horizontal pixel. fn thumbnail_sample_fraction_vertical( image: &I, left: u32, right: u32, bottom: u32, fraction_vertical: f32, ) -> (S, S, S, S) where I: GenericImageView, P: Pixel, S: Primitive + Enlargeable, { let fract = fraction_vertical; let mut sum_bot = ThumbnailSum::zeroed(); let mut sum_top = ThumbnailSum::zeroed(); for x in left..right { let k_bot = image.get_pixel(x, bottom); sum_bot.add_pixel(k_bot); let k_top = image.get_pixel(x, bottom + 1); sum_top.add_pixel(k_top); } // Now we approximate: bot/n*fract + top/n*(1-fract) let fact_top = fract / ((right - left) as f32); let fact_bot = (1. 
- fract) / ((right - left) as f32); let mix_bot_and_top = |botv: S::Larger, topv: S::Larger| { ::from(fact_bot * botv.to_f32().unwrap() + fact_top * topv.to_f32().unwrap()) .expect("Average sample value should fit into sample type") }; ( mix_bot_and_top(sum_bot.0, sum_top.0), mix_bot_and_top(sum_bot.1, sum_top.1), mix_bot_and_top(sum_bot.2, sum_top.2), mix_bot_and_top(sum_bot.3, sum_top.3), ) } /// Get a single pixel for a thumbnail where the input window does not enclose any full pixel. fn thumbnail_sample_fraction_both( image: &I, left: u32, fraction_vertical: f32, bottom: u32, fraction_horizontal: f32, ) -> (S, S, S, S) where I: GenericImageView, P: Pixel, S: Primitive + Enlargeable, { #[allow(deprecated)] let k_bl = image.get_pixel(left, bottom).channels4(); #[allow(deprecated)] let k_tl = image.get_pixel(left, bottom + 1).channels4(); #[allow(deprecated)] let k_br = image.get_pixel(left + 1, bottom).channels4(); #[allow(deprecated)] let k_tr = image.get_pixel(left + 1, bottom + 1).channels4(); let frac_v = fraction_vertical; let frac_h = fraction_horizontal; let fact_tr = frac_v * frac_h; let fact_tl = frac_v * (1. - frac_h); let fact_br = (1. - frac_v) * frac_h; let fact_bl = (1. - frac_v) * (1. - frac_h); let mix = |br: S, tr: S, bl: S, tl: S| { ::from( fact_br * br.to_f32().unwrap() + fact_tr * tr.to_f32().unwrap() + fact_bl * bl.to_f32().unwrap() + fact_tl * tl.to_f32().unwrap(), ) .expect("Average sample value should fit into sample type") }; ( mix(k_br.0, k_tr.0, k_bl.0, k_tl.0), mix(k_br.1, k_tr.1, k_bl.1, k_tl.1), mix(k_br.2, k_tr.2, k_bl.2, k_tl.2), mix(k_br.3, k_tr.3, k_bl.3, k_tl.3), ) } /// Perform a 3x3 box filter on the supplied image. /// ```kernel``` is an array of the filter weights of length 9. pub fn filter3x3(image: &I, kernel: &[f32]) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { // The kernel's input positions relative to the current pixel. let taps: &[(isize, isize)] = &[ (-1, -1), (0, -1), (1, -1), (-1, 0), (0, 0), (1, 0), (-1, 1), (0, 1), (1, 1), ]; let (width, height) = image.dimensions(); let mut out = ImageBuffer::new(width, height); let max = S::DEFAULT_MAX_VALUE; let max: f32 = NumCast::from(max).unwrap(); let sum = match kernel.iter().fold(0.0, |s, &item| s + item) { x if x == 0.0 => 1.0, sum => sum, }; let sum = (sum, sum, sum, sum); for y in 1..height - 1 { for x in 1..width - 1 { let mut t = (0.0, 0.0, 0.0, 0.0); // TODO: There is no need to recalculate the kernel for each pixel. // Only a subtract and addition is needed for pixels after the first // in each row. for (&k, &(a, b)) in kernel.iter().zip(taps.iter()) { let k = (k, k, k, k); let x0 = x as isize + a; let y0 = y as isize + b; let p = image.get_pixel(x0 as u32, y0 as u32); #[allow(deprecated)] let (k1, k2, k3, k4) = p.channels4(); let vec: (f32, f32, f32, f32) = ( NumCast::from(k1).unwrap(), NumCast::from(k2).unwrap(), NumCast::from(k3).unwrap(), NumCast::from(k4).unwrap(), ); t.0 += vec.0 * k.0; t.1 += vec.1 * k.1; t.2 += vec.2 * k.2; t.3 += vec.3 * k.3; } let (t1, t2, t3, t4) = (t.0 / sum.0, t.1 / sum.1, t.2 / sum.2, t.3 / sum.3); #[allow(deprecated)] let t = Pixel::from_channels( NumCast::from(clamp(t1, 0.0, max)).unwrap(), NumCast::from(clamp(t2, 0.0, max)).unwrap(), NumCast::from(clamp(t3, 0.0, max)).unwrap(), NumCast::from(clamp(t4, 0.0, max)).unwrap(), ); out.put_pixel(x, y, t); } } out } /// Resize the supplied image to the specified dimensions. /// ```nwidth``` and ```nheight``` are the new dimensions. 
/// ```filter``` is the sampling filter to use. pub fn resize( image: &I, nwidth: u32, nheight: u32, filter: FilterType, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, ::Subpixel: 'static, { // Check if there is nothing to sample from. let is_empty = { let (width, height) = image.dimensions(); width == 0 || height == 0 }; if is_empty { return ImageBuffer::new(nwidth, nheight); } // check if the new dimensions are the same as the old. if they are, make a copy instead of resampling if (nwidth, nheight) == image.dimensions() { let mut tmp = ImageBuffer::new(image.width(), image.height()); tmp.copy_from(image, 0, 0).unwrap(); return tmp; } let mut method = match filter { FilterType::Nearest => Filter { kernel: Box::new(box_kernel), support: 0.0, }, FilterType::Triangle => Filter { kernel: Box::new(triangle_kernel), support: 1.0, }, FilterType::CatmullRom => Filter { kernel: Box::new(catmullrom_kernel), support: 2.0, }, FilterType::Gaussian => Filter { kernel: Box::new(gaussian_kernel), support: 3.0, }, FilterType::Lanczos3 => Filter { kernel: Box::new(lanczos3_kernel), support: 3.0, }, }; // Note: tmp is not necessarily actually Rgba let tmp: Rgba32FImage = vertical_sample(image, nheight, &mut method); horizontal_sample(&tmp, nwidth, &mut method) } /// Performs a Gaussian blur on the supplied image. /// ```sigma``` is a measure of how much to blur by. /// Use [crate::imageops::fast_blur()] for a faster but less /// accurate version. pub fn blur( image: &I, sigma: f32, ) -> ImageBuffer::Subpixel>> where I::Pixel: 'static, { let sigma = if sigma <= 0.0 { 1.0 } else { sigma }; let mut method = Filter { kernel: Box::new(|x| gaussian(x, sigma)), support: 2.0 * sigma, }; let (width, height) = image.dimensions(); let is_empty = width == 0 || height == 0; if is_empty { return ImageBuffer::new(width, height); } // Keep width and height the same for horizontal and // vertical sampling. // Note: tmp is not necessarily actually Rgba let tmp: Rgba32FImage = vertical_sample(image, height, &mut method); horizontal_sample(&tmp, width, &mut method) } /// Performs an unsharpen mask on the supplied image. /// ```sigma``` is the amount to blur the image by. /// ```threshold``` is the threshold for minimal brightness change that will be sharpened. /// /// See pub fn unsharpen(image: &I, sigma: f32, threshold: i32) -> ImageBuffer> where I: GenericImageView, P: Pixel + 'static, S: Primitive + 'static, { let mut tmp = blur(image, sigma); let max = S::DEFAULT_MAX_VALUE; let max: i32 = NumCast::from(max).unwrap(); let (width, height) = image.dimensions(); for y in 0..height { for x in 0..width { let a = image.get_pixel(x, y); let b = tmp.get_pixel_mut(x, y); let p = a.map2(b, |c, d| { let ic: i32 = NumCast::from(c).unwrap(); let id: i32 = NumCast::from(d).unwrap(); let diff = ic - id; if diff.abs() > threshold { let e = clamp(ic + diff, 0, max); // FIXME what does this do for f32? clamp 0-1 integers?? 
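// Worked example (added for illustration): if the original sample is 120 and
// the blurred sample is 100, then diff = 20; with a threshold below 20 the
// output becomes clamp(120 + 20, 0, max) = 140, pushing the edge further
// away from the blurred value.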
NumCast::from(e).unwrap() } else { c } }); *b = p; } } tmp } #[cfg(test)] mod tests { use super::{resize, sample_bilinear, sample_nearest, FilterType}; use crate::{GenericImageView, ImageBuffer, RgbImage}; #[cfg(feature = "benchmarks")] use test; #[bench] #[cfg(all(feature = "benchmarks", feature = "png"))] fn bench_resize(b: &mut test::Bencher) { use std::path::Path; let img = crate::open(Path::new("./examples/fractal.png")).unwrap(); b.iter(|| { test::black_box(resize(&img, 200, 200, FilterType::Nearest)); }); b.bytes = 800 * 800 * 3 + 200 * 200 * 3; } #[test] #[cfg(feature = "png")] fn test_resize_same_size() { use std::path::Path; let img = crate::open(Path::new("./examples/fractal.png")).unwrap(); let resize = img.resize(img.width(), img.height(), FilterType::Triangle); assert!(img.pixels().eq(resize.pixels())) } #[test] #[cfg(feature = "png")] fn test_sample_bilinear() { use std::path::Path; let img = crate::open(Path::new("./examples/fractal.png")).unwrap(); assert!(sample_bilinear(&img, 0., 0.).is_some()); assert!(sample_bilinear(&img, 1., 0.).is_some()); assert!(sample_bilinear(&img, 0., 1.).is_some()); assert!(sample_bilinear(&img, 1., 1.).is_some()); assert!(sample_bilinear(&img, 0.5, 0.5).is_some()); assert!(sample_bilinear(&img, 1.2, 0.5).is_none()); assert!(sample_bilinear(&img, 0.5, 1.2).is_none()); assert!(sample_bilinear(&img, 1.2, 1.2).is_none()); assert!(sample_bilinear(&img, -0.1, 0.2).is_none()); assert!(sample_bilinear(&img, 0.2, -0.1).is_none()); assert!(sample_bilinear(&img, -0.1, -0.1).is_none()); } #[test] #[cfg(feature = "png")] fn test_sample_nearest() { use std::path::Path; let img = crate::open(Path::new("./examples/fractal.png")).unwrap(); assert!(sample_nearest(&img, 0., 0.).is_some()); assert!(sample_nearest(&img, 1., 0.).is_some()); assert!(sample_nearest(&img, 0., 1.).is_some()); assert!(sample_nearest(&img, 1., 1.).is_some()); assert!(sample_nearest(&img, 0.5, 0.5).is_some()); assert!(sample_nearest(&img, 1.2, 0.5).is_none()); assert!(sample_nearest(&img, 0.5, 1.2).is_none()); assert!(sample_nearest(&img, 1.2, 1.2).is_none()); assert!(sample_nearest(&img, -0.1, 0.2).is_none()); assert!(sample_nearest(&img, 0.2, -0.1).is_none()); assert!(sample_nearest(&img, -0.1, -0.1).is_none()); } #[test] fn test_sample_bilinear_correctness() { use crate::Rgba; let img = ImageBuffer::from_fn(2, 2, |x, y| match (x, y) { (0, 0) => Rgba([255, 0, 0, 0]), (0, 1) => Rgba([0, 255, 0, 0]), (1, 0) => Rgba([0, 0, 255, 0]), (1, 1) => Rgba([0, 0, 0, 255]), _ => panic!(), }); assert_eq!(sample_bilinear(&img, 0.5, 0.5), Some(Rgba([64; 4]))); assert_eq!(sample_bilinear(&img, 0.0, 0.0), Some(Rgba([255, 0, 0, 0]))); assert_eq!(sample_bilinear(&img, 0.0, 1.0), Some(Rgba([0, 255, 0, 0]))); assert_eq!(sample_bilinear(&img, 1.0, 0.0), Some(Rgba([0, 0, 255, 0]))); assert_eq!(sample_bilinear(&img, 1.0, 1.0), Some(Rgba([0, 0, 0, 255]))); assert_eq!( sample_bilinear(&img, 0.5, 0.0), Some(Rgba([128, 0, 128, 0])) ); assert_eq!( sample_bilinear(&img, 0.0, 0.5), Some(Rgba([128, 128, 0, 0])) ); assert_eq!( sample_bilinear(&img, 0.5, 1.0), Some(Rgba([0, 128, 0, 128])) ); assert_eq!( sample_bilinear(&img, 1.0, 0.5), Some(Rgba([0, 0, 128, 128])) ); } #[bench] #[cfg(feature = "benchmarks")] fn bench_sample_bilinear(b: &mut test::Bencher) { use crate::Rgba; let img = ImageBuffer::from_fn(2, 2, |x, y| match (x, y) { (0, 0) => Rgba([255, 0, 0, 0]), (0, 1) => Rgba([0, 255, 0, 0]), (1, 0) => Rgba([0, 0, 255, 0]), (1, 1) => Rgba([0, 0, 0, 255]), _ => panic!(), }); b.iter(|| { sample_bilinear(&img, 
test::black_box(0.5), test::black_box(0.5)); }); } #[test] fn test_sample_nearest_correctness() { use crate::Rgba; let img = ImageBuffer::from_fn(2, 2, |x, y| match (x, y) { (0, 0) => Rgba([255, 0, 0, 0]), (0, 1) => Rgba([0, 255, 0, 0]), (1, 0) => Rgba([0, 0, 255, 0]), (1, 1) => Rgba([0, 0, 0, 255]), _ => panic!(), }); assert_eq!(sample_nearest(&img, 0.0, 0.0), Some(Rgba([255, 0, 0, 0]))); assert_eq!(sample_nearest(&img, 0.0, 1.0), Some(Rgba([0, 255, 0, 0]))); assert_eq!(sample_nearest(&img, 1.0, 0.0), Some(Rgba([0, 0, 255, 0]))); assert_eq!(sample_nearest(&img, 1.0, 1.0), Some(Rgba([0, 0, 0, 255]))); assert_eq!(sample_nearest(&img, 0.5, 0.5), Some(Rgba([0, 0, 0, 255]))); assert_eq!(sample_nearest(&img, 0.5, 0.0), Some(Rgba([0, 0, 255, 0]))); assert_eq!(sample_nearest(&img, 0.0, 0.5), Some(Rgba([0, 255, 0, 0]))); assert_eq!(sample_nearest(&img, 0.5, 1.0), Some(Rgba([0, 0, 0, 255]))); assert_eq!(sample_nearest(&img, 1.0, 0.5), Some(Rgba([0, 0, 0, 255]))); } #[bench] #[cfg(all(feature = "benchmarks", feature = "tiff"))] fn bench_resize_same_size(b: &mut test::Bencher) { let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/tiff/testsuite/mandrill.tiff" ); let image = crate::open(path).unwrap(); b.iter(|| { test::black_box(image.resize(image.width(), image.height(), FilterType::CatmullRom)); }); b.bytes = (image.width() * image.height() * 3) as u64; } #[test] fn test_issue_186() { let img: RgbImage = ImageBuffer::new(100, 100); let _ = resize(&img, 50, 50, FilterType::Lanczos3); } #[bench] #[cfg(all(feature = "benchmarks", feature = "tiff"))] fn bench_thumbnail(b: &mut test::Bencher) { let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/tiff/testsuite/mandrill.tiff" ); let image = crate::open(path).unwrap(); b.iter(|| { test::black_box(image.thumbnail(256, 256)); }); b.bytes = 512 * 512 * 4 + 256 * 256 * 4; } #[bench] #[cfg(all(feature = "benchmarks", feature = "tiff"))] fn bench_thumbnail_upsize(b: &mut test::Bencher) { let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/tiff/testsuite/mandrill.tiff" ); let image = crate::open(path).unwrap().thumbnail(256, 256); b.iter(|| { test::black_box(image.thumbnail(512, 512)); }); b.bytes = 512 * 512 * 4 + 256 * 256 * 4; } #[bench] #[cfg(all(feature = "benchmarks", feature = "tiff"))] fn bench_thumbnail_upsize_irregular(b: &mut test::Bencher) { let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/tiff/testsuite/mandrill.tiff" ); let image = crate::open(path).unwrap().thumbnail(193, 193); b.iter(|| { test::black_box(image.thumbnail(256, 256)); }); b.bytes = 193 * 193 * 4 + 256 * 256 * 4; } #[test] #[cfg(feature = "png")] fn resize_transparent_image() { use super::FilterType::{CatmullRom, Gaussian, Lanczos3, Nearest, Triangle}; use crate::imageops::crop_imm; use crate::RgbaImage; fn assert_resize(image: &RgbaImage, filter: FilterType) { let resized = resize(image, 16, 16, filter); let cropped = crop_imm(&resized, 5, 5, 6, 6).to_image(); for pixel in cropped.pixels() { let alpha = pixel.0[3]; assert!( alpha != 254 && alpha != 253, "alpha value: {}, {:?}", alpha, filter ); } } let path = concat!( env!("CARGO_MANIFEST_DIR"), "/tests/images/png/transparency/tp1n3p08.png" ); let img = crate::open(path).unwrap(); let rgba8 = img.as_rgba8().unwrap(); let filters = &[Nearest, Triangle, CatmullRom, Gaussian, Lanczos3]; for filter in filters { assert_resize(rgba8, *filter); } } #[test] fn bug_1600() { let image = crate::RgbaImage::from_raw(629, 627, vec![255; 629 * 627 * 4]).unwrap(); let result = resize(&image, 22, 22, 
FilterType::Lanczos3); assert!(result.into_raw().into_iter().any(|c| c != 0)); } #[test] fn issue_2340() { let empty = crate::GrayImage::from_raw(1 << 31, 0, vec![]).unwrap(); // Really we're checking that no overflow / outsized allocation happens here. let result = resize(&empty, 1, 1, FilterType::Lanczos3); assert!(result.into_raw().into_iter().all(|c| c == 0)); // With the previous strategy before the regression this would allocate 1TB of memory for a // temporary during the sampling evaluation. let result = resize(&empty, 256, 256, FilterType::Lanczos3); assert!(result.into_raw().into_iter().all(|c| c == 0)); } #[test] fn issue_2340_refl() { // Tests the swapped coordinate version of `issue_2340`. let empty = crate::GrayImage::from_raw(0, 1 << 31, vec![]).unwrap(); let result = resize(&empty, 1, 1, FilterType::Lanczos3); assert!(result.into_raw().into_iter().all(|c| c == 0)); let result = resize(&empty, 256, 256, FilterType::Lanczos3); assert!(result.into_raw().into_iter().all(|c| c == 0)); } } image-0.25.5/src/lib.rs000064400000000000000000000277741046102023000127230ustar 00000000000000//! # Overview //! //! This crate provides native rust implementations of image encoding and decoding as well as some //! basic image manipulation functions. Additional documentation can currently also be found in the //! [README.md file which is most easily viewed on //! github](https://github.com/image-rs/image/blob/main/README.md). //! //! There are two core problems for which this library provides solutions: a unified interface for image //! encodings and simple generic buffers for their content. It's possible to use either feature //! without the other. The focus is on a small and stable set of common operations that can be //! supplemented by other specialized crates. The library also prefers safe solutions with few //! dependencies. //! //! # High level API //! //! Load images using [`ImageReader`](crate::image_reader::ImageReader): //! //! ```rust,no_run //! use std::io::Cursor; //! use image::ImageReader; //! # fn main() -> Result<(), image::ImageError> { //! # let bytes = vec![0u8]; //! //! let img = ImageReader::open("myimage.png")?.decode()?; //! let img2 = ImageReader::new(Cursor::new(bytes)).with_guessed_format()?.decode()?; //! # Ok(()) //! # } //! ``` //! //! And save them using [`save`] or [`write_to`] methods: //! //! ```rust,no_run //! # use std::io::{Write, Cursor}; //! # use image::{DynamicImage, ImageFormat}; //! # #[cfg(feature = "png")] //! # fn main() -> Result<(), image::ImageError> { //! # let img: DynamicImage = unimplemented!(); //! # let img2: DynamicImage = unimplemented!(); //! img.save("empty.jpg")?; //! //! let mut bytes: Vec = Vec::new(); //! img2.write_to(&mut Cursor::new(&mut bytes), image::ImageFormat::Png)?; //! # Ok(()) //! # } //! # #[cfg(not(feature = "png"))] fn main() {} //! ``` //! //! With default features, the crate includes support for [many common image formats](codecs/index.html#supported-formats). //! //! [`save`]: enum.DynamicImage.html#method.save //! [`write_to`]: enum.DynamicImage.html#method.write_to //! [`ImageReader`]: struct.Reader.html //! //! # Image buffers //! //! The two main types for storing images: //! * [`ImageBuffer`] which holds statically typed image contents. //! * [`DynamicImage`] which is an enum over the supported `ImageBuffer` formats //! and supports conversions between them. //! //! As well as a few more specialized options: //! * [`GenericImage`] trait for a mutable image buffer. //! 
* [`GenericImageView`] trait for read only references to a `GenericImage`. //! * [`flat`] module containing types for interoperability with generic channel //! matrices and foreign interfaces. //! //! [`GenericImageView`]: trait.GenericImageView.html //! [`GenericImage`]: trait.GenericImage.html //! [`ImageBuffer`]: struct.ImageBuffer.html //! [`DynamicImage`]: enum.DynamicImage.html //! [`flat`]: flat/index.html //! //! # Low level encoding/decoding API //! //! Implementations of [`ImageEncoder`] provides low level control over encoding: //! ```rust,no_run //! # use std::io::Write; //! # use image::DynamicImage; //! # use image::ImageEncoder; //! # #[cfg(feature = "jpeg")] //! # fn main() -> Result<(), image::ImageError> { //! # use image::codecs::jpeg::JpegEncoder; //! # let img: DynamicImage = unimplemented!(); //! # let writer: Box = unimplemented!(); //! let encoder = JpegEncoder::new_with_quality(&mut writer, 95); //! img.write_with_encoder(encoder)?; //! # Ok(()) //! # } //! # #[cfg(not(feature = "jpeg"))] fn main() {} //! ``` //! While [`ImageDecoder`] and [`ImageDecoderRect`] give access to more advanced decoding options: //! //! ```rust,no_run //! # use std::io::{BufReader, Cursor}; //! # use image::DynamicImage; //! # use image::ImageDecoder; //! # #[cfg(feature = "png")] //! # fn main() -> Result<(), image::ImageError> { //! # use image::codecs::png::PngDecoder; //! # let img: DynamicImage = unimplemented!(); //! # let reader: BufReader> = unimplemented!(); //! let decoder = PngDecoder::new(&mut reader)?; //! let icc = decoder.icc_profile(); //! let img = DynamicImage::from_decoder(decoder)?; //! # Ok(()) //! # } //! # #[cfg(not(feature = "png"))] fn main() {} //! ``` //! //! [`DynamicImage::from_decoder`]: enum.DynamicImage.html#method.from_decoder //! [`ImageDecoderRect`]: trait.ImageDecoderRect.html //! [`ImageDecoder`]: trait.ImageDecoder.html //! [`ImageEncoder`]: trait.ImageEncoder.html #![warn(missing_docs)] #![warn(unused_qualifications)] #![deny(unreachable_pub)] #![deny(deprecated)] #![deny(missing_copy_implementations)] #![cfg_attr(all(test, feature = "benchmarks"), feature(test))] #![cfg_attr(docsrs, feature(doc_auto_cfg))] // We've temporarily disabled PCX support for 0.25.5 release // by removing the corresponding feature. // We want to ship bug fixes without committing to PCX support. // // Cargo shows warnings about code depending on a nonexistent feature // even to people using the crate as a dependency, // so we have to suppress those warnings. 
#![allow(unexpected_cfgs)] #[cfg(all(test, feature = "benchmarks"))] extern crate test; #[cfg(test)] #[macro_use] extern crate quickcheck; pub use crate::color::{ColorType, ExtendedColorType}; pub use crate::color::{Luma, LumaA, Rgb, Rgba}; pub use crate::error::{ImageError, ImageResult}; pub use crate::image::{ AnimationDecoder, GenericImage, GenericImageView, ImageDecoder, ImageDecoderRect, ImageEncoder, ImageFormat, // Iterators Pixels, SubImage, }; pub use crate::buffer_::{ GrayAlphaImage, GrayImage, // Image types ImageBuffer, Rgb32FImage, RgbImage, Rgba32FImage, RgbaImage, }; pub use crate::flat::FlatSamples; // Traits pub use crate::traits::{EncodableLayout, Pixel, PixelWithColorType, Primitive}; // Opening and loading images pub use crate::dynimage::{ image_dimensions, load_from_memory, load_from_memory_with_format, open, save_buffer, save_buffer_with_format, write_buffer_with_format, }; pub use crate::image_reader::free_functions::{guess_format, load}; pub use crate::image_reader::{ImageReader, LimitSupport, Limits}; pub use crate::dynimage::DynamicImage; pub use crate::animation::{Delay, Frame, Frames}; // More detailed error type pub mod error; /// Iterators and other auxiliary structure for the `ImageBuffer` type. pub mod buffer { // Only those not exported at the top-level pub use crate::buffer_::{ ConvertBuffer, EnumeratePixels, EnumeratePixelsMut, EnumerateRows, EnumerateRowsMut, Pixels, PixelsMut, Rows, RowsMut, }; #[cfg(feature = "rayon")] pub use crate::buffer_par::*; } // Math utils pub mod math; // Image processing functions pub mod imageops; // Buffer representations for ffi. pub mod flat; /// Encoding and decoding for various image file formats. /// /// # Supported formats /// /// /// /// | Format | Decoding | Encoding | /// | -------- | ----------------------------------------- | --------------------------------------- | /// | AVIF | Yes \* | Yes (lossy only) | /// | BMP | Yes | Yes | /// | DDS | Yes | --- | /// | Farbfeld | Yes | Yes | /// | GIF | Yes | Yes | /// | HDR | Yes | Yes | /// | ICO | Yes | Yes | /// | JPEG | Yes | Yes | /// | EXR | Yes | Yes | /// | PNG | Yes | Yes | /// | PNM | Yes | Yes | /// | QOI | Yes | Yes | /// | TGA | Yes | Yes | /// | TIFF | Yes | Yes | /// | WebP | Yes | Yes (lossless only) | /// /// - \* Requires the `avif-native` feature, uses the libdav1d C library. /// /// ## A note on format specific features /// /// One of the main goals of `image` is stability, in runtime but also for programmers. This /// ensures that performance as well as safety fixes reach a majority of its user base with little /// effort. Re-exporting all details of its dependencies would run counter to this goal as it /// linked _all_ major version bumps between them and `image`. As such, we are wary of exposing too /// many details, or configuration options, that are not shared between different image formats. /// /// Nevertheless, the advantage of precise control is hard to ignore. We will thus consider /// _wrappers_, not direct re-exports, in either of the following cases: /// /// 1. A standard specifies that configuration _x_ is required for decoders/encoders and there /// exists an essentially canonical way to control it. /// 2. At least two different implementations agree on some (sub-)set of features in practice. /// 3. A technical argument including measurements of the performance, space benefits, or otherwise /// objectively quantified benefits can be made, and the added interface is unlikely to require /// breaking changes. 
///
/// Features that fulfill two or more criteria are preferred.
///
/// Re-exports of dependencies that reach version `1` will be discussed when it happens.
pub mod codecs {
    #[cfg(any(feature = "avif", feature = "avif-native"))]
    pub mod avif;
    #[cfg(feature = "bmp")]
    pub mod bmp;
    #[cfg(feature = "dds")]
    pub mod dds;
    #[cfg(feature = "ff")]
    pub mod farbfeld;
    #[cfg(feature = "gif")]
    pub mod gif;
    #[cfg(feature = "hdr")]
    pub mod hdr;
    #[cfg(feature = "ico")]
    pub mod ico;
    #[cfg(feature = "jpeg")]
    pub mod jpeg;
    #[cfg(feature = "exr")]
    pub mod openexr;
    #[cfg(feature = "pcx")]
    pub mod pcx;
    #[cfg(feature = "png")]
    pub mod png;
    #[cfg(feature = "pnm")]
    pub mod pnm;
    #[cfg(feature = "qoi")]
    pub mod qoi;
    #[cfg(feature = "tga")]
    pub mod tga;
    #[cfg(feature = "tiff")]
    pub mod tiff;
    #[cfg(feature = "webp")]
    pub mod webp;

    #[cfg(feature = "dds")]
    mod dxt;
}

mod animation;
#[path = "buffer.rs"]
mod buffer_;
#[cfg(feature = "rayon")]
mod buffer_par;
mod color;
mod dynimage;
mod image;
mod image_reader;
pub mod metadata;

//TODO delete this module after a few releases
/// Deprecated `io` module: the original `io` module has been renamed to `image_reader`.
pub mod io {
    #[deprecated(note = "this type has been moved and renamed to image::ImageReader")]
    /// Deprecated re-export of `ImageReader` as `Reader`
    pub type Reader<R> = super::ImageReader<R>;

    #[deprecated(note = "this type has been moved to image::Limits")]
    /// Deprecated re-export of `Limits`
    pub type Limits = super::Limits;

    #[deprecated(note = "this type has been moved to image::LimitSupport")]
    /// Deprecated re-export of `LimitSupport`
    pub type LimitSupport = super::LimitSupport;
}

mod traits;
mod utils;

// Can't use the macro-call itself within the `doc` attribute. So force it to eval it as part of
// the macro invocation.
//
// The inspiration for the macro and implementation is from
//
//
// MIT License
//
// Copyright (c) 2018 Guillaume Gomez
macro_rules! insert_as_doc {
    { $content:expr } => {
        #[allow(unused_doc_comments)]
        #[doc = $content]
        extern { }
    }
}

// Provides the README.md as doc, to ensure the example works!
insert_as_doc!(include_str!("../README.md"));

image-0.25.5/src/math/mod.rs

//! Mathematical helper functions and types.

mod rect;
mod utils;

pub use self::rect::Rect;
pub(super) use utils::resize_dimensions;

image-0.25.5/src/math/rect.rs

/// A Rectangle defined by its top left corner, width and height.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Rect {
    /// The x coordinate of the top left corner.
    pub x: u32,
    /// The y coordinate of the top left corner.
    pub y: u32,
    /// The rectangle's width.
    pub width: u32,
    /// The rectangle's height.
    pub height: u32,
}

image-0.25.5/src/math/utils.rs

//! Shared mathematical utility functions.

use std::cmp::max;

/// Calculates the width and height an image should be resized to.
/// This preserves aspect ratio, and based on the `fill` parameter
/// will either fill the dimensions to fit inside the smaller constraint
/// (will overflow the specified bounds on one axis to preserve
/// aspect ratio), or will shrink so that both dimensions are
/// completely contained within the given `width` and `height`,
/// with empty space on one axis.
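///
/// For illustration, here is a sketch of how the `fill` flag changes the result
/// for a 100x200 image resized towards a 200x500 target (the `fill = true` case
/// mirrors the `resize_handles_fill` unit test below; the function is
/// crate-private, so this block is not compiled as a doctest):
///
/// ```rust,ignore
/// // fill = true: cover the 200x500 target completely, overflowing the width
/// assert_eq!(resize_dimensions(100, 200, 200, 500, true), (250, 500));
/// // fill = false: fit entirely inside 200x500, leaving unused height
/// assert_eq!(resize_dimensions(100, 200, 200, 500, false), (200, 400));
/// ```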
pub(crate) fn resize_dimensions( width: u32, height: u32, nwidth: u32, nheight: u32, fill: bool, ) -> (u32, u32) { let wratio = f64::from(nwidth) / f64::from(width); let hratio = f64::from(nheight) / f64::from(height); let ratio = if fill { f64::max(wratio, hratio) } else { f64::min(wratio, hratio) }; let nw = max((f64::from(width) * ratio).round() as u64, 1); let nh = max((f64::from(height) * ratio).round() as u64, 1); if nw > u64::from(u32::MAX) { let ratio = f64::from(u32::MAX) / f64::from(width); (u32::MAX, max((f64::from(height) * ratio).round() as u32, 1)) } else if nh > u64::from(u32::MAX) { let ratio = f64::from(u32::MAX) / f64::from(height); (max((f64::from(width) * ratio).round() as u32, 1), u32::MAX) } else { (nw as u32, nh as u32) } } #[cfg(test)] mod test { quickcheck! { fn resize_bounds_correctly_width(old_w: u32, new_w: u32) -> bool { if old_w == 0 || new_w == 0 { return true; } // In this case, the scaling is limited by scaling of height. // We could check that case separately but it does not conform to the same expectation. if new_w as u64 * 400u64 >= old_w as u64 * u64::from(u32::MAX) { return true; } let result = super::resize_dimensions(old_w, 400, new_w, u32::MAX, false); let exact = (400_f64 * new_w as f64 / old_w as f64).round() as u32; result.0 == new_w && result.1 == exact.max(1) } } quickcheck! { fn resize_bounds_correctly_height(old_h: u32, new_h: u32) -> bool { if old_h == 0 || new_h == 0 { return true; } // In this case, the scaling is limited by scaling of width. // We could check that case separately but it does not conform to the same expectation. if 400u64 * new_h as u64 >= old_h as u64 * u64::from(u32::MAX) { return true; } let result = super::resize_dimensions(400, old_h, u32::MAX, new_h, false); let exact = (400_f64 * new_h as f64 / old_h as f64).round() as u32; result.1 == new_h && result.0 == exact.max(1) } } #[test] fn resize_handles_fill() { let result = super::resize_dimensions(100, 200, 200, 500, true); assert!(result.0 == 250); assert!(result.1 == 500); let result = super::resize_dimensions(200, 100, 500, 200, true); assert!(result.0 == 500); assert!(result.1 == 250); } #[test] fn resize_never_rounds_to_zero() { let result = super::resize_dimensions(1, 150, 128, 128, false); assert!(result.0 > 0); assert!(result.1 > 0); } #[test] fn resize_handles_overflow() { let result = super::resize_dimensions(100, u32::MAX, 200, u32::MAX, true); assert!(result.0 == 100); assert!(result.1 == u32::MAX); let result = super::resize_dimensions(u32::MAX, 100, u32::MAX, 200, true); assert!(result.0 == u32::MAX); assert!(result.1 == 100); } #[test] fn resize_rounds() { // Only truncation will result in (3840, 2229) and (2160, 3719) let result = super::resize_dimensions(4264, 2476, 3840, 2160, true); assert_eq!(result, (3840, 2230)); let result = super::resize_dimensions(2476, 4264, 2160, 3840, false); assert_eq!(result, (2160, 3720)); } #[test] fn resize_handles_zero() { let result = super::resize_dimensions(0, 100, 100, 100, false); assert_eq!(result, (1, 100)); let result = super::resize_dimensions(100, 0, 100, 100, false); assert_eq!(result, (100, 1)); let result = super::resize_dimensions(100, 100, 0, 100, false); assert_eq!(result, (1, 1)); let result = super::resize_dimensions(100, 100, 100, 0, false); assert_eq!(result, (1, 1)); } } image-0.25.5/src/metadata.rs000064400000000000000000000103041046102023000137120ustar 00000000000000//! 
Types describing image metadata

use std::io::{Cursor, Read};

use byteorder_lite::{BigEndian, LittleEndian, ReadBytesExt};

/// Describes the transformations to be applied to the image.
/// Compatible with [Exif orientation](https://web.archive.org/web/20200412005226/https://www.impulseadventure.com/photo/exif-orientation.html).
///
/// Orientation is specified in the file's metadata, and is often written by cameras.
///
/// You can apply it to an image via [`DynamicImage::apply_orientation`](crate::DynamicImage::apply_orientation).
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub enum Orientation {
    /// Do not perform any transformations.
    NoTransforms,
    /// Rotate by 90 degrees clockwise.
    Rotate90,
    /// Rotate by 180 degrees. Can be performed in-place.
    Rotate180,
    /// Rotate by 270 degrees clockwise. Equivalent to rotating by 90 degrees counter-clockwise.
    Rotate270,
    /// Flip horizontally. Can be performed in-place.
    FlipHorizontal,
    /// Flip vertically. Can be performed in-place.
    FlipVertical,
    /// Rotate by 90 degrees clockwise and flip horizontally.
    Rotate90FlipH,
    /// Rotate by 270 degrees clockwise and flip horizontally.
    Rotate270FlipH,
}

impl Orientation {
    /// Converts from [Exif orientation](https://web.archive.org/web/20200412005226/https://www.impulseadventure.com/photo/exif-orientation.html)
    pub fn from_exif(exif_orientation: u8) -> Option<Self> {
        match exif_orientation {
            1 => Some(Self::NoTransforms),
            2 => Some(Self::FlipHorizontal),
            3 => Some(Self::Rotate180),
            4 => Some(Self::FlipVertical),
            5 => Some(Self::Rotate90FlipH),
            6 => Some(Self::Rotate90),
            7 => Some(Self::Rotate270FlipH),
            8 => Some(Self::Rotate270),
            0 | 9.. => None,
        }
    }

    /// Converts into [Exif orientation](https://web.archive.org/web/20200412005226/https://www.impulseadventure.com/photo/exif-orientation.html)
    pub fn to_exif(self) -> u8 {
        match self {
            Self::NoTransforms => 1,
            Self::FlipHorizontal => 2,
            Self::Rotate180 => 3,
            Self::FlipVertical => 4,
            Self::Rotate90FlipH => 5,
            Self::Rotate90 => 6,
            Self::Rotate270FlipH => 7,
            Self::Rotate270 => 8,
        }
    }

    pub(crate) fn from_exif_chunk(chunk: &[u8]) -> Option<Self> {
        let mut reader = Cursor::new(chunk);

        let mut magic = [0; 4];
        reader.read_exact(&mut magic).ok()?;

        match magic {
            [0x49, 0x49, 42, 0] => {
                let ifd_offset = reader.read_u32::<LittleEndian>().ok()?;
                reader.set_position(ifd_offset as u64);
                let entries = reader.read_u16::<LittleEndian>().ok()?;
                for _ in 0..entries {
                    let tag = reader.read_u16::<LittleEndian>().ok()?;
                    let format = reader.read_u16::<LittleEndian>().ok()?;
                    let count = reader.read_u32::<LittleEndian>().ok()?;
                    let value = reader.read_u16::<LittleEndian>().ok()?;
                    let _padding = reader.read_u16::<LittleEndian>().ok()?;
                    if tag == 0x112 && format == 3 && count == 1 {
                        return Self::from_exif(value.min(255) as u8);
                    }
                }
            }
            [0x4d, 0x4d, 0, 42] => {
                let ifd_offset = reader.read_u32::<BigEndian>().ok()?;
                reader.set_position(ifd_offset as u64);
                let entries = reader.read_u16::<BigEndian>().ok()?;
                for _ in 0..entries {
                    let tag = reader.read_u16::<BigEndian>().ok()?;
                    let format = reader.read_u16::<BigEndian>().ok()?;
                    let count = reader.read_u32::<BigEndian>().ok()?;
                    let value = reader.read_u16::<BigEndian>().ok()?;
                    let _padding = reader.read_u16::<BigEndian>().ok()?;
                    if tag == 0x112 && format == 3 && count == 1 {
                        return Self::from_exif(value.min(255) as u8);
                    }
                }
            }
            _ => {}
        }
        None
    }
}

image-0.25.5/src/traits.rs

//!
This module provides useful traits that were deprecated in rust // Note copied from the stdlib under MIT license use num_traits::{Bounded, Num, NumCast}; use std::ops::AddAssign; use crate::color::{Luma, LumaA, Rgb, Rgba}; use crate::ExtendedColorType; /// Types which are safe to treat as an immutable byte slice in a pixel layout /// for image encoding. pub trait EncodableLayout: seals::EncodableLayout { /// Get the bytes of this value. fn as_bytes(&self) -> &[u8]; } impl EncodableLayout for [u8] { fn as_bytes(&self) -> &[u8] { bytemuck::cast_slice(self) } } impl EncodableLayout for [u16] { fn as_bytes(&self) -> &[u8] { bytemuck::cast_slice(self) } } impl EncodableLayout for [f32] { fn as_bytes(&self) -> &[u8] { bytemuck::cast_slice(self) } } /// The type of each channel in a pixel. For example, this can be `u8`, `u16`, `f32`. // TODO rename to `PixelComponent`? Split up into separate traits? Seal? pub trait Primitive: Copy + NumCast + Num + PartialOrd + Clone + Bounded { /// The maximum value for this type of primitive within the context of color. /// For floats, the maximum is `1.0`, whereas the integer types inherit their usual maximum values. const DEFAULT_MAX_VALUE: Self; /// The minimum value for this type of primitive within the context of color. /// For floats, the minimum is `0.0`, whereas the integer types inherit their usual minimum values. const DEFAULT_MIN_VALUE: Self; } macro_rules! declare_primitive { ($base:ty: ($from:expr)..$to:expr) => { impl Primitive for $base { const DEFAULT_MAX_VALUE: Self = $to; const DEFAULT_MIN_VALUE: Self = $from; } }; } declare_primitive!(usize: (0)..Self::MAX); declare_primitive!(u8: (0)..Self::MAX); declare_primitive!(u16: (0)..Self::MAX); declare_primitive!(u32: (0)..Self::MAX); declare_primitive!(u64: (0)..Self::MAX); declare_primitive!(isize: (Self::MIN)..Self::MAX); declare_primitive!(i8: (Self::MIN)..Self::MAX); declare_primitive!(i16: (Self::MIN)..Self::MAX); declare_primitive!(i32: (Self::MIN)..Self::MAX); declare_primitive!(i64: (Self::MIN)..Self::MAX); declare_primitive!(f32: (0.0)..1.0); declare_primitive!(f64: (0.0)..1.0); /// An `Enlargable::Larger` value should be enough to calculate /// the sum (average) of a few hundred or thousand Enlargeable values. pub trait Enlargeable: Sized + Bounded + NumCast { type Larger: Copy + NumCast + Num + PartialOrd + Clone + Bounded + AddAssign; fn clamp_from(n: Self::Larger) -> Self { if n > Self::max_value().to_larger() { Self::max_value() } else if n < Self::min_value().to_larger() { Self::min_value() } else { NumCast::from(n).unwrap() } } fn to_larger(self) -> Self::Larger { NumCast::from(self).unwrap() } } impl Enlargeable for u8 { type Larger = u32; } impl Enlargeable for u16 { type Larger = u32; } impl Enlargeable for u32 { type Larger = u64; } impl Enlargeable for u64 { type Larger = u128; } impl Enlargeable for usize { // Note: On 32-bit architectures, u64 should be enough here. type Larger = u128; } impl Enlargeable for i8 { type Larger = i32; } impl Enlargeable for i16 { type Larger = i32; } impl Enlargeable for i32 { type Larger = i64; } impl Enlargeable for i64 { type Larger = i128; } impl Enlargeable for isize { // Note: On 32-bit architectures, i64 should be enough here. type Larger = i128; } impl Enlargeable for f32 { type Larger = f64; } impl Enlargeable for f64 { type Larger = f64; } /// Linear interpolation without involving floating numbers. 
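/// (For the integer implementations below, `Ratio` is still a float type such as
/// `f32`; only the endpoints and the clamped result stay integral.)
///
/// A rough sketch of the behaviour, assuming the trait were reachable from user
/// code (it is crate-internal, so this block is illustrative only):
///
/// ```rust,ignore
/// // 25% of the way from 10u8 to 20u8: 10 + (20 - 10) * 0.25 = 12.5,
/// // and the cast back to `u8` truncates to 12.
/// assert_eq!(<u8 as Lerp>::lerp(10, 20, 0.25), 12);
/// ```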
pub trait Lerp: Bounded + NumCast {
    type Ratio: Primitive;

    fn lerp(a: Self, b: Self, ratio: Self::Ratio) -> Self {
        let a = <Self::Ratio as NumCast>::from(a).unwrap();
        let b = <Self::Ratio as NumCast>::from(b).unwrap();

        let res = a + (b - a) * ratio;

        if res > NumCast::from(Self::max_value()).unwrap() {
            Self::max_value()
        } else if res < NumCast::from(0).unwrap() {
            NumCast::from(0).unwrap()
        } else {
            NumCast::from(res).unwrap()
        }
    }
}

impl Lerp for u8 {
    type Ratio = f32;
}

impl Lerp for u16 {
    type Ratio = f32;
}

impl Lerp for u32 {
    type Ratio = f64;
}

impl Lerp for f32 {
    type Ratio = f32;

    fn lerp(a: Self, b: Self, ratio: Self::Ratio) -> Self {
        a + (b - a) * ratio
    }
}

/// The pixel with an associated `ColorType`.
/// Not all possible pixels represent one of the predefined `ColorType`s.
pub trait PixelWithColorType: Pixel + private::SealedPixelWithColorType {
    /// This pixel has the format of one of the predefined `ColorType`s,
    /// such as `Rgb8`, `La16` or `Rgba32F`.
    /// This is needed for automatically detecting
    /// a color format when saving an image as a file.
    const COLOR_TYPE: ExtendedColorType;
}

impl PixelWithColorType for Rgb<u8> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgb8;
}
impl PixelWithColorType for Rgb<u16> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgb16;
}
impl PixelWithColorType for Rgb<f32> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgb32F;
}

impl PixelWithColorType for Rgba<u8> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgba8;
}
impl PixelWithColorType for Rgba<u16> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgba16;
}
impl PixelWithColorType for Rgba<f32> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::Rgba32F;
}

impl PixelWithColorType for Luma<u8> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::L8;
}
impl PixelWithColorType for Luma<u16> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::L16;
}
impl PixelWithColorType for LumaA<u8> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::La8;
}
impl PixelWithColorType for LumaA<u16> {
    const COLOR_TYPE: ExtendedColorType = ExtendedColorType::La16;
}

/// Prevents downstream users from implementing the `PixelWithColorType` trait
mod private {
    use crate::color::*;

    pub trait SealedPixelWithColorType {}

    impl SealedPixelWithColorType for Rgb<u8> {}
    impl SealedPixelWithColorType for Rgb<u16> {}
    impl SealedPixelWithColorType for Rgb<f32> {}

    impl SealedPixelWithColorType for Rgba<u8> {}
    impl SealedPixelWithColorType for Rgba<u16> {}
    impl SealedPixelWithColorType for Rgba<f32> {}

    impl SealedPixelWithColorType for Luma<u8> {}
    impl SealedPixelWithColorType for LumaA<u8> {}
    impl SealedPixelWithColorType for Luma<u16> {}
    impl SealedPixelWithColorType for LumaA<u16> {}
}

/// A generalized pixel.
///
/// A pixel object is usually not used standalone but as a view into an image buffer.
pub trait Pixel: Copy + Clone {
    /// The scalar type that is used to store each channel in this pixel.
    type Subpixel: Primitive;

    /// The number of channels of this pixel type.
    const CHANNEL_COUNT: u8;

    /// Returns the components as a slice.
    fn channels(&self) -> &[Self::Subpixel];

    /// Returns the components as a mutable slice
    fn channels_mut(&mut self) -> &mut [Self::Subpixel];

    /// A string that can help to interpret the meaning of each channel
    /// See [gimp babl](http://gegl.org/babl/).
    const COLOR_MODEL: &'static str;

    /// Returns the channels of this pixel as a 4 tuple.
    /// If the pixel has less than 4 channels the remainder is filled with the maximum value.
    #[deprecated(since = "0.24.0", note = "Use `channels()` or `channels_mut()`")]
    fn channels4(
        &self,
    ) -> (
        Self::Subpixel,
        Self::Subpixel,
        Self::Subpixel,
        Self::Subpixel,
    );

    /// Construct a pixel from the 4 channels a, b, c and d.
    /// If the pixel does not contain 4 channels the extra are ignored.
    #[deprecated(
        since = "0.24.0",
        note = "Use the constructor of the pixel, for example `Rgba([r,g,b,a])` or `Pixel::from_slice`"
    )]
    fn from_channels(
        a: Self::Subpixel,
        b: Self::Subpixel,
        c: Self::Subpixel,
        d: Self::Subpixel,
    ) -> Self;

    /// Returns a view into a slice.
    ///
    /// Note: The slice length is not checked on creation. Thus the caller has to ensure
    /// that the slice is long enough to prevent panics if the pixel is used later on.
    fn from_slice(slice: &[Self::Subpixel]) -> &Self;

    /// Returns a mutable view into a mutable slice.
    ///
    /// Note: The slice length is not checked on creation. Thus the caller has to ensure
    /// that the slice is long enough to prevent panics if the pixel is used later on.
    fn from_slice_mut(slice: &mut [Self::Subpixel]) -> &mut Self;

    /// Convert this pixel to RGB
    fn to_rgb(&self) -> Rgb<Self::Subpixel>;

    /// Convert this pixel to RGB with an alpha channel
    fn to_rgba(&self) -> Rgba<Self::Subpixel>;

    /// Convert this pixel to luma
    fn to_luma(&self) -> Luma<Self::Subpixel>;

    /// Convert this pixel to luma with an alpha channel
    fn to_luma_alpha(&self) -> LumaA<Self::Subpixel>;

    /// Apply the function ```f``` to each channel of this pixel.
    fn map<F>(&self, f: F) -> Self
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel;

    /// Apply the function ```f``` to each channel of this pixel. Works in-place.
    fn apply<F>(&mut self, f: F)
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel;

    /// Apply the function ```f``` to each channel except the alpha channel.
    /// Apply the function ```g``` to the alpha channel.
    fn map_with_alpha<F, G>(&self, f: F, g: G) -> Self
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel,
        G: FnMut(Self::Subpixel) -> Self::Subpixel;

    /// Apply the function ```f``` to each channel except the alpha channel.
    /// Apply the function ```g``` to the alpha channel. Works in-place.
    fn apply_with_alpha<F, G>(&mut self, f: F, g: G)
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel,
        G: FnMut(Self::Subpixel) -> Self::Subpixel;

    /// Apply the function ```f``` to each channel except the alpha channel.
    fn map_without_alpha<F>(&self, f: F) -> Self
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel,
    {
        let mut this = *self;
        this.apply_with_alpha(f, |x| x);
        this
    }

    /// Apply the function ```f``` to each channel except the alpha channel.
    /// Works in-place.
    fn apply_without_alpha<F>(&mut self, f: F)
    where
        F: FnMut(Self::Subpixel) -> Self::Subpixel,
    {
        self.apply_with_alpha(f, |x| x);
    }

    /// Apply the function ```f``` to each channel of this pixel and
    /// ```other``` pairwise.
    fn map2<F>(&self, other: &Self, f: F) -> Self
    where
        F: FnMut(Self::Subpixel, Self::Subpixel) -> Self::Subpixel;

    /// Apply the function ```f``` to each channel of this pixel and
    /// ```other``` pairwise. Works in-place.
    fn apply2<F>(&mut self, other: &Self, f: F)
    where
        F: FnMut(Self::Subpixel, Self::Subpixel) -> Self::Subpixel;

    /// Invert this pixel
    fn invert(&mut self);

    /// Blend the color of a given pixel into ourself, taking into account alpha channels
    fn blend(&mut self, other: &Self);
}

/// Private module for supertraits of sealed traits.
mod seals {
    pub trait EncodableLayout {}

    impl EncodableLayout for [u8] {}
    impl EncodableLayout for [u16] {}
    impl EncodableLayout for [f32] {}
}

image-0.25.5/src/utils/mod.rs

//! Utilities

use std::iter::repeat;

#[inline(always)]
pub(crate) fn expand_packed<F>(buf: &mut [u8], channels: usize, bit_depth: u8, mut func: F)
where
    F: FnMut(u8, &mut [u8]),
{
    let pixels = buf.len() / channels * bit_depth as usize;
    let extra = pixels % 8;
    let entries = pixels / 8
        + match extra {
            0 => 0,
            _ => 1,
        };
    let mask = ((1u16 << bit_depth) - 1) as u8;
    let i = (0..entries)
        .rev() // Reverse iterator
        .flat_map(|idx|
            // This has to be reversed to
            (0..8 / bit_depth).map(|i| i * bit_depth).zip(repeat(idx)))
        .skip(extra);
    let buf_len = buf.len();
    let j_inv = (channels..buf_len).step_by(channels);
    for ((shift, i), j_inv) in i.zip(j_inv) {
        let j = buf_len - j_inv;
        let pixel = (buf[i] & (mask << shift)) >> shift;
        func(pixel, &mut buf[j..(j + channels)]);
    }
}

/// Expand a buffer of packed 1, 2, or 4 bits integers into u8's. Assumes that
/// every `row_size` entries there are padding bits up to the next byte boundary.
#[allow(dead_code)] // When no image formats that use it are enabled
pub(crate) fn expand_bits(bit_depth: u8, row_size: u32, buf: &[u8]) -> Vec<u8> {
    // Note: this conversion assumes that the scanlines begin on byte boundaries
    let mask = (1u8 << bit_depth as usize) - 1;
    let scaling_factor = 255 / ((1 << bit_depth as usize) - 1);
    let bit_width = row_size * u32::from(bit_depth);
    let skip = if bit_width % 8 == 0 {
        0
    } else {
        (8 - bit_width % 8) / u32::from(bit_depth)
    };
    let row_len = row_size + skip;
    let mut p = Vec::new();
    let mut i = 0;
    for v in buf {
        for shift_inv in 1..=8 / bit_depth {
            let shift = 8 - bit_depth * shift_inv;
            // skip the pixels that can be neglected because scanlines should
            // start at byte boundaries
            if i % (row_len as usize) < (row_size as usize) {
                let pixel = (v & mask << shift as usize) >> shift as usize;
                p.push(pixel * scaling_factor);
            }
            i += 1;
        }
    }
    p
}

/// Checks if the provided dimensions would cause an overflow.
#[allow(dead_code)] // When no image formats that use it are enabled
pub(crate) fn check_dimension_overflow(width: u32, height: u32, bytes_per_pixel: u8) -> bool {
    u64::from(width) * u64::from(height) > u64::MAX / u64::from(bytes_per_pixel)
}

#[allow(dead_code)] // When no image formats that use it are enabled
pub(crate) fn vec_copy_to_u8<T>(vec: &[T]) -> Vec<u8>
where
    T: bytemuck::Pod,
{
    bytemuck::cast_slice(vec).to_owned()
}

#[inline]
pub(crate) fn clamp<N>(a: N, min: N, max: N) -> N
where
    N: PartialOrd,
{
    if a < min {
        min
    } else if a > max {
        max
    } else {
        a
    }
}

#[cfg(test)]
mod test {
    #[test]
    fn gray_to_luma8_skip() {
        let check = |bit_depth, w, from, to| {
            assert_eq!(super::expand_bits(bit_depth, w, from), to);
        };

        // Bit depth 1, skip is more than half a byte
        check(
            1,
            10,
            &[0b11110000, 0b11000000, 0b00001111, 0b11000000],
            vec![
                255, 255, 255, 255, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255,
            ],
        );
        // Bit depth 2, skip is more than half a byte
        check(
            2,
            5,
            &[0b11110000, 0b11000000, 0b00001111, 0b11000000],
            vec![255, 255, 0, 0, 255, 0, 0, 255, 255, 255],
        );
        // Bit depth 2, skip is 0
        check(
            2,
            4,
            &[0b11110000, 0b00001111],
            vec![255, 255, 0, 0, 0, 0, 255, 255],
        );
        // Bit depth 4, skip is half a byte
        check(4, 1, &[0b11110011, 0b00001100], vec![255, 0]);
    }
}
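// A small additional sketch (hypothetical test module, values chosen for
// illustration) showing the scaling rule used by `expand_bits`: each packed
// sample is multiplied by `255 / (2^bit_depth - 1)` to fill the full `u8` range.
#[cfg(test)]
mod expand_bits_example {
    #[test]
    fn expands_packed_4_bit_samples() {
        // One byte holds two 4-bit samples: 0b1111 (15) and 0b0000 (0).
        // With bit_depth = 4 the scaling factor is 255 / 15 = 17,
        // so the samples expand to 255 and 0.
        assert_eq!(super::expand_bits(4, 2, &[0b1111_0000]), vec![255, 0]);
    }
}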