pyzstd-0.19.1/CHANGELOG.md

# Changelog

All notable changes to this project will be documented in this file.

## 0.19.1 (November 13, 2025)

- Fix `SeekableZstdFile` seek table entry writing on 32-bit architectures when there is a huge number of entries

## 0.19.0 (November 7, 2025)

- The project has been completely refactored to use the Zstandard implementation from the standard library ([PEP-784](https://peps.python.org/pep-0784/))
- The refactor has some minor impact on public APIs, such as changing the exception raised on invalid input
- Add `backports.zstd` dependency for Python versions before 3.14
- Changes in build dependencies: remove `setuptools` and the C build toolchain, add `hatchling` and `hatch-vcs`
- Remove git submodule usage
- Drop support for Python 3.9 and below
- Use `ruff` as formatter and linter
- Embed type hints in Python code, and check them with `mypy`

## 0.18.0 (October 5, 2025)

- Support for Python 3.14
- Deprecate the `read_size` and `write_size` parameters of `ZstdFile` and `SeekableZstdFile`
- Deprecate `richmem_compress` and `RichMemZstdCompressor`
- Rework documentation to suggest using `compression.zstd` from the Python stdlib, and provide a migration guide
- Include the `zstd` library license in package distributions

## 0.17.0 (May 10, 2025)

- Upgrade zstd source code from v1.5.6 to [v1.5.7](https://github.com/facebook/zstd/releases/tag/v1.5.7)
- Raise an exception when attempting to decompress empty data
- Add `ZstdFile.name` property
- Deprecate the `(de)compress_stream` functions
- Use a leading `_` for private objects
- Build wheels for Windows ARM64
- Support for PyPy 3.11

## 0.16.2 (October 10, 2024)

- Build wheels for Python 3.13
- Deprecate support for Python versions before 3.9 and stop building wheels for them

## 0.16.1 (August 4, 2024)

- Compatibility with Python 3.13

## 0.16.0 (May 20, 2024)

- Upgrade zstd source code from v1.5.5 to [v1.5.6](https://github.com/facebook/zstd/releases/tag/v1.5.6)
- Fix the `pyzstd_pep517` parameter name in `get_requires_for_build_wheel`
- Deprecate support for Python versions before 3.8 and stop building wheels for them
- Minor fixes in type hints
- Refactor README & CHANGELOG files

## 0.15.10 (Mar 24, 2024)

- Fix `SeekableZstdFile` class being unable to open a new file in append mode.
- Support sub-interpreters on CPython 3.12+, which can utilize the [per-interpreter GIL](https://docs.python.org/3.12/whatsnew/3.12.html#pep-684-a-per-interpreter-gil).
- On CPython (3.5-3.12) + Linux, use an alternative output buffer implementation that can utilize the `mremap` mechanism.
- Change repository URL and maintainer following the deletion of the GitHub account of the original author, Ma Lin (animalize). See [#1](https://github.com/Rogdham/pyzstd/issues/1).

## 0.15.9 (Jun 24, 2023)

ZstdFile class related changes:

- Add the [`SeekableZstdFile`](https://pyzstd.readthedocs.io/#SeekableZstdFile) class, a subclass of `ZstdFile` that supports the [Zstandard Seekable Format](https://github.com/facebook/zstd/blob/dev/contrib/seekable_format/zstd_seekable_compression_format.md).
- Add a _mode_ argument to the `ZstdFile.flush()` method; it can now flush a zstd frame.
- Add _read_size_ and _write_size_ arguments to the `ZstdFile.__init__()` method, to work better with network file systems.
- Optimize `ZstdFile` performance to C language level.
## 0.15.7 (Apr 21, 2023)

ZstdDict class changes:

- Fix a bug where these advanced compression parameters may be ignored when loading a dictionary: `windowLog`, `hashLog`, `chainLog`, `searchLog`, `minMatch`, `targetLength`, `strategy`, `enableLongDistanceMatching`, `ldmHashLog`, `ldmMinMatch`, `ldmBucketSizeLog`, `ldmHashRateLog`, and some non-public parameters.
- When compressing, load an undigested dictionary instead of a digested dictionary by default. Loading an undigested dictionary again is slower, see [differences](https://pyzstd.readthedocs.io/#ZstdDict.as_digested_dict).
- Add the [`.as_prefix`](https://pyzstd.readthedocs.io/#ZstdDict.as_prefix) attribute. Zstd can then be used as a [patching engine](https://pyzstd.readthedocs.io/#patching-engine).

## 0.15.6 (Apr 5, 2023)

- Upgrade zstd source code from v1.5.4 to [v1.5.5](https://github.com/facebook/zstd/releases/tag/v1.5.5).

## 0.15.4 (Feb 24, 2023)

- Upgrade zstd source code from v1.5.2 to [v1.5.4](https://github.com/facebook/zstd/releases/tag/v1.5.4). v1.5.3 is a non-public release.
- Support the `pyproject.toml` build mechanism (PEP-517). Note that specifying build options in the old way may be invalid, see [build commands](https://pyzstd.readthedocs.io/#build-pyzstd).
- Support "multi-phase initialization" (PEP-489) on CPython 3.11+, to be able to work with CPython sub-interpreters in the future. Currently this build option is disabled by default.
- Add a command line interface (CLI).

## 0.15.3 (Aug 3, 2022)

- Fix: `ZstdError` objects can't be pickled.

## 0.15.2 (Jan 22, 2022)

- Upgrade zstd source code from v1.5.1 to [v1.5.2](https://github.com/facebook/zstd/releases/tag/v1.5.2).

## 0.15.1 (Dec 25, 2021)

- Upgrade zstd source code from v1.5.0 to [v1.5.1](https://github.com/facebook/zstd/releases/tag/v1.5.1).
- Fix: `ZstdFile.write()` / `train_dict()` / `finalize_dict()` may use the wrong length for some buffer protocol objects.
- Two behavior changes:
  - Setting `CParameter.nbWorkers` to `1` now means "1-thread multi-threaded mode", rather than "single-threaded mode".
  - If the underlying zstd library doesn't support multi-threaded compression, no longer automatically fall back to "single-threaded mode"; a `ZstdError` exception is now raised.
- Add a module level variable [`zstd_support_multithread`](https://pyzstd.readthedocs.io/#zstd_support_multithread).
- Add a setup.py option `--avx2`, see [build options](https://pyzstd.readthedocs.io/#build-pyzstd).

## 0.15.0 (May 18, 2021)

- Upgrade zstd source code from v1.4.9 to [v1.5.0](https://github.com/facebook/zstd/releases/tag/v1.5.0).
- Some improvements, no API changes.

## 0.14.4 (Mar 24, 2021)

- Add a CFFI implementation that can work with PyPy.
- Allow dynamically linking to the zstd library.

## 0.14.3 (Mar 4, 2021)

- Upgrade zstd source code from v1.4.8 to [v1.4.9](https://github.com/facebook/zstd/releases/tag/v1.4.9).

## 0.14.2 (Feb 24, 2021)

- Add two convenience functions: [`compress_stream()`](https://pyzstd.readthedocs.io/#compress_stream) and [`decompress_stream()`](https://pyzstd.readthedocs.io/#decompress_stream).
- Some improvements.

## 0.14.1 (Dec 19, 2020)

- Upgrade zstd source code from v1.4.5 to [v1.4.8](https://github.com/facebook/zstd/releases/tag/v1.4.8).
  - v1.4.6 is a non-public release for the Linux kernel.
  - v1.4.8 is a hotfix for [v1.4.7](https://github.com/facebook/zstd/releases/tag/v1.4.7).
- Some improvements, no API changes.

## 0.13.0 (Nov 7, 2020)

- `ZstdDecompressor` class: it now has the same API and behavior as the BZ2Decompressor / LZMADecompressor classes in the Python standard library; it stops after a frame is decompressed.
- Add an `EndlessZstdDecompressor` class; it accepts multiple concatenated frames. It is the renamed previous `ZstdDecompressor` class, except that `.at_frame_edge` is `True` when both the input and output streams are at a frame edge.
- Rename the `zstd_open()` function to `open()`, consistent with the Python standard library.
- `decompress()` function:
  - ~9% faster when there is one frame and the decompressed size was recorded in the frame header.
  - raises `ZstdError` when the input **or** output data is not at a frame edge. Previously, it was only raised when the output data was not at a frame edge.

## 0.12.5 (Oct 12, 2020)

- No longer use [Argument Clinic](https://docs.python.org/3/howto/clinic.html); now supports Python 3.5+, previously 3.7+.

## 0.12.4 (Oct 7, 2020)

- It seems the API is stable.

## 0.2.4 (Sep 2, 2020)

- The first version uploaded to PyPI.
- Includes zstd [v1.4.5](https://github.com/facebook/zstd/releases/tag/v1.4.5) source code.

pyzstd-0.19.1/requirements-dev.txt

-e .
-r requirements-lint.txt
-r requirements-type.txt

pyzstd-0.19.1/requirements-lint.txt

ruff==0.14.8

pyzstd-0.19.1/requirements-type.txt

mypy==1.19.0

pyzstd-0.19.1/.github/workflows/build.yml

name: build

on:
  push:
    branches:
      - "master"
      - "ci-*"
    tags:
      - "**"
  pull_request:
  workflow_dispatch:

env:
  PY_COLORS: 1

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          # fetch all commits for version computation
          fetch-depth: 0
      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: "3.14"
      - name: Install dependencies
        run: python -m pip install -U build
      - name: Build
        run: python -m build
      - name: List distributions
        run: ls -lR dist
      - name: Save build artifacts
        uses: actions/upload-artifact@v5
        with:
          name: build
          path: dist
      - name: Install sdist
        run: python -m pip install dist/*.tar.gz
      - name: Test
        run: python -m unittest discover tests -v

  tests-py:
    name: Test | ${{ matrix.python }}
    runs-on: ubuntu-latest
    needs:
      - build
    strategy:
      matrix:
        python:
          - "3.10"
          - "3.11"
          - "3.12"
          - "3.13"
          - "3.14"
          - "pypy-3.10"
          - "pypy-3.11"
    steps:
      - uses: actions/checkout@v6
      - name: Restore build artifacts
        uses: actions/download-artifact@v6
        with:
          name: build
          path: dist
      - name: Setup Python ${{ matrix.python }}
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.python }}
      - name: Install wheel
        run: python -m pip install dist/*.whl
      - name: Test
        run: python -m unittest discover tests -v

  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: 3.14
      - name: Install dependencies
        run: python -m pip install -r requirements-lint.txt
      - name: ruff check
        run: ruff check
      - name: ruff format
        run: ruff format --check

  type:
    name: Type
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: 3.14
      - name: Install dependencies
        run: python -m pip install -r requirements-type.txt
      - name: Create _version.py
        run: echo '__version__ = ""' > src/pyzstd/_version.py
      - name: mypy
        run: mypy

  publish:
    name: Publish to PyPI
    if: startsWith(github.ref, 'refs/tags')
    needs:
      - build
      - tests-py
    runs-on: ubuntu-latest
    environment: publish
    permissions:
      id-token: write # This permission is mandatory for trusted publishing
    steps:
      - name: Restore build artifacts
        uses: actions/download-artifact@v6
        with:
          name: build
          path: dist
      - name: List distributions
        run: ls -lR dist
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          verbose: true
          print-hash: true

pyzstd-0.19.1/docs/.readthedocs.yaml

# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-24.04
  tools:
    python: "3.13"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
  install:
    - requirements: docs/requirements.txt

pyzstd-0.19.1/docs/conf.py

project = "pyzstd module"
author = "Ma Lin and contributors"
copyright = "2020-present, Ma Lin and contributors"
language = "en"
master_doc = "index"
pygments_style = "sphinx"
extensions = ["myst_parser", "sphinx_rtd_theme"]
html_theme = "sphinx_rtd_theme"

pyzstd-0.19.1/docs/deprecated.md

# pyzstd module: deprecations

## `compress_stream`

```python
# before
with io.open(input_file_path, 'rb') as ifh:
    with io.open(output_file_path, 'wb') as ofh:
        compress_stream(ifh, ofh, level_or_option=5)

# after
with io.open(input_file_path, 'rb') as ifh:
    with pyzstd.open(output_file_path, 'w', level_or_option=5) as ofh:
        shutil.copyfileobj(ifh, ofh)
```

```{hint}
Instead of the `read_size` and `write_size` parameters, you can use `shutil.copyfileobj`'s `length` parameter.
```

Alternatively, you can use `ZstdCompressor` to have more control:

```python
# after: more complex alternative
with io.open(input_file_path, 'rb') as ifh:
    with io.open(output_file_path, 'wb') as ofh:
        compressor = ZstdCompressor(level_or_option=5)
        compressor._set_pledged_input_size(pledged_input_size)  # optional
        while data := ifh.read(read_size):
            ofh.write(compressor.compress(data))
            callback_progress(ifh.tell(), ofh.tell())  # optional
        ofh.write(compressor.flush())
```

_Deprecated in version 0.17.0._

## `decompress_stream`

```python
# before
with io.open(input_file_path, 'rb') as ifh:
    with io.open(output_file_path, 'wb') as ofh:
        decompress_stream(ifh, ofh)

# after
with pyzstd.open(input_file_path) as ifh:
    with io.open(output_file_path, 'wb') as ofh:
        shutil.copyfileobj(ifh, ofh)
```

```{hint}
Instead of the `read_size` and `write_size` parameters, you can use `shutil.copyfileobj`'s `length` parameter.
```

Alternatively, you can use `EndlessZstdDecompressor` to have more control:

```python
# after: more complex alternative
with io.open(input_file_path, 'rb') as ifh:
    with io.open(output_file_path, 'wb') as ofh:
        decompressor = EndlessZstdDecompressor()
        while True:
            if decompressor.needs_input:
                data = ifh.read(read_size)
                if not data:
                    break
            else:
                data = b""
            ofh.write(decompressor.decompress(data, write_size))
            callback_progress(ifh.tell(), ofh.tell())  # optional
        if not decompressor.at_frame_edge:
            raise ValueError("zstd data ends in an incomplete frame")
```

_Deprecated in version 0.17.0._

## `richmem_compress`

```python
# before
data_out = pyzstd.richmem_compress(data_in, level_or_option=5)

# after
data_out = pyzstd.compress(data_in, level_or_option=5)
```

_Deprecated in version 0.18.0._

## `RichMemZstdCompressor`

```python
# before
compressor = pyzstd.RichMemZstdCompressor(level_or_option=5)
data_out1 = compressor.compress(data_in1)
data_out2 = compressor.compress(data_in2)
data_out3 = compressor.compress(data_in3)

# after
data_out1 = pyzstd.compress(data_in1, level_or_option=5)
data_out2 = pyzstd.compress(data_in2, level_or_option=5)
data_out3 = pyzstd.compress(data_in3, level_or_option=5)
```

_Deprecated in version 0.18.0._

pyzstd-0.19.1/docs/index.md

```{toctree}
:hidden:
:maxdepth: 2

Home
Migration to stdlib
Module reference
Deprecations
```

# pyzstd

The `pyzstd` library was created by Ma Lin in 2020 to provide Python support for [Zstandard](http://www.zstd.net), using an API style similar to the `bz2`, `lzma`, and `zlib` modules.

In 2025, an effort led by [Emma Smith](https://github.com/emmatyping) (now a CPython core developer) resulted in [PEP 784][] and the inclusion of the [`compression.zstd` module][compression.zstd] in the Python 3.14 standard library. The implementation was adapted from `pyzstd`, with its maintainer [Rogdham](https://github.com/rogdham) contributing directly to the effort. Rogdham also developed the [`backports.zstd` library][backports.zstd], which backports the `compression.zstd` APIs to older Python versions.

In version 0.19.0, `pyzstd` became a pure-Python package by using the `compression.zstd` module internally.

Recommendations:

- **New projects**: use the standard library [`compression.zstd` module][compression.zstd], with [`backports.zstd`][backports.zstd] as a fallback for older Python versions.
- **Existing projects**: consider [migrating to the standard library implementation](./stdlib.md). In the meantime, [documentation for the `pyzstd` module is available here](./pyzstd.rst).

[PEP 784]: https://peps.python.org/pep-0784/
[compression.zstd]: https://docs.python.org/3.14/library/compression.zstd.html
[backports.zstd]: https://github.com/Rogdham/backports.zstd

pyzstd-0.19.1/docs/pyzstd.rst

=======================
pyzstd module reference
=======================

Introduction
------------

The pyzstd module provides classes and functions for compressing and decompressing data using the `Zstandard `_ (or zstd for short) algorithm. The API style is similar to Python's bz2/lzma/zlib modules.

* Pure-Python package relying on the `compression.zstd` module internally (`PEP 784 `_).
* Supports the `Zstandard Seekable Format `__.
* Has a command line interface: ``python -m pyzstd --help``.

Links: `GitHub page `_, `PyPI page `_.

Features of zstd:

* Fast compression and decompression speed.
* With :ref:`multi-threaded compression`, the compression speed improves significantly.
* With a pre-trained :ref:`dictionary`, the compression ratio on small data (a few KiB) improves dramatically.
* :ref:`Frames and blocks` allow more flexible use, suitable for many scenarios.
* Can be used as a :ref:`patching engine`.

.. note:: Other zstd implementations for Python:

    * `compression.zstd `_, in the standard library since Python 3.14.
    * `backports.zstd `_, the backport of the stdlib implementation for Python versions before 3.14.
    * `zstd `_, a very simple module.
    * `zstandard `_, provides a rich API.

Exception
---------

.. py:exception:: ZstdError

    This exception is raised when an error occurs when calling the underlying zstd library. Subclass of ``Exception``.

Simple compression/decompression
--------------------------------

This section contains:

* function :py:func:`compress`
* function :py:func:`decompress`

.. hint:: When there is a big number of individual pieces of data of the same type, reusing these objects may eliminate the small overhead of creating a context / setting parameters / loading a dictionary:

    * :py:class:`ZstdCompressor`

.. py:function:: compress(data, level_or_option=None, zstd_dict=None)

    Compress *data*, return the compressed data.

    Compressing ``b''`` will get an empty content frame (9 bytes or more).

    :param data: Data to be compressed.
    :type data: bytes-like object
    :param level_or_option: When it's an ``int`` object, it represents the :ref:`compression level`. When it's a ``dict`` object, it contains :ref:`advanced compression parameters`. The default value ``None`` means to use zstd's default compression level/parameters.
    :type level_or_option: int or dict
    :param zstd_dict: Pre-trained dictionary for compression.
    :type zstd_dict: ZstdDict
    :return: Compressed data
    :rtype: bytes

    .. sourcecode:: python

        # int compression level
        compressed_dat = compress(raw_dat, 10)

        # dict option, use 6 threads to compress, and append a 4-byte checksum.
        option = {CParameter.compressionLevel : 10,
                  CParameter.nbWorkers : 6,
                  CParameter.checksumFlag : 1}
        compressed_dat = compress(raw_dat, option)

.. py:function:: decompress(data, zstd_dict=None, option=None)

    Decompress *data*, return the decompressed data. Supports multiple concatenated :ref:`frames`.

    :param data: Data to be decompressed.
    :type data: bytes-like object
    :param zstd_dict: Pre-trained dictionary for decompression.
    :type zstd_dict: ZstdDict
    :param option: A ``dict`` object that contains :ref:`advanced decompression parameters`. The default value ``None`` means to use zstd's default decompression parameters.
    :type option: dict
    :return: Decompressed data
    :rtype: bytes
    :raises ZstdError: If decompression fails.

.. _stream_compression:

Streaming compression
---------------------

You can use :py:class:`ZstdFile` for compressing data as needed.

Advanced users may be interested in:

* class :py:class:`ZstdCompressor`, similar to the compressors in the Python standard library.

It helps to know a little about zstd data, see :ref:`frame and block`.

.. py:class:: ZstdCompressor

    A streaming compressor. It's thread-safe at method level.

    .. py:method:: __init__(self, level_or_option=None, zstd_dict=None)

        Initialize a ZstdCompressor object.

        :param level_or_option: When it's an ``int`` object, it represents the :ref:`compression level`. When it's a ``dict`` object, it contains :ref:`advanced compression parameters`. The default value ``None`` means to use zstd's default compression level/parameters.
        :type level_or_option: int or dict
        :param zstd_dict: Pre-trained dictionary for compression.
        :type zstd_dict: ZstdDict

    .. py:method:: compress(self, data, mode=ZstdCompressor.CONTINUE)

        Provide data to the compressor object.

        :param data: Data to be compressed.
        :type data: bytes-like object
        :param mode: Can be these 3 values: :py:attr:`ZstdCompressor.CONTINUE`, :py:attr:`ZstdCompressor.FLUSH_BLOCK`, :py:attr:`ZstdCompressor.FLUSH_FRAME`.
        :return: A chunk of compressed data if possible, or ``b''`` otherwise.
        :rtype: bytes

    .. py:method:: flush(self, mode=ZstdCompressor.FLUSH_FRAME)

        Flush any remaining data in the internal buffer.

        Since zstd data consists of one or more independent frames, the compressor object can still be used after this method is called.

        **Note**: Abuse of this method will reduce compression ratio, and some programs can only decompress single frame data. Use it only when necessary.

        :param mode: Can be these 2 values: :py:attr:`ZstdCompressor.FLUSH_FRAME`, :py:attr:`ZstdCompressor.FLUSH_BLOCK`.
        :return: Flushed data.
        :rtype: bytes

    .. py:attribute:: last_mode

        The last mode used with this compressor; its value can be :py:attr:`~ZstdCompressor.CONTINUE`, :py:attr:`~ZstdCompressor.FLUSH_BLOCK`, :py:attr:`~ZstdCompressor.FLUSH_FRAME`. Initialized to :py:attr:`~ZstdCompressor.FLUSH_FRAME`.

        It can be used to get the current state of a compressor, such as: data flushed, a frame ended.

    .. py:attribute:: CONTINUE

        Used for the *mode* parameter in the :py:meth:`~ZstdCompressor.compress` method.

        Collect more data; the encoder decides when to output the compressed result, for optimal compression ratio. Usually used for traditional streaming compression.

    .. py:attribute:: FLUSH_BLOCK

        Used for the *mode* parameter in the :py:meth:`~ZstdCompressor.compress`, :py:meth:`~ZstdCompressor.flush` methods.

        Flush any remaining data, but don't close the current :ref:`frame`. Usually used for communication scenarios.

        If there is data, it creates at least one new :ref:`block` that can be decoded immediately on reception. If there is no remaining data, no block is created and ``b''`` is returned.

        **Note**: Abuse of this mode will reduce compression ratio. Use it only when necessary.

    .. py:attribute:: FLUSH_FRAME

        Used for the *mode* parameter in the :py:meth:`~ZstdCompressor.compress`, :py:meth:`~ZstdCompressor.flush` methods.

        Flush any remaining data, and close the current :ref:`frame`. Usually used for traditional flush.

        Since zstd data consists of one or more independent frames, data can still be provided after a frame is closed.

        **Note**: Abuse of this mode will reduce compression ratio, and some programs can only decompress single frame data. Use it only when necessary.

    .. sourcecode:: python

        c = ZstdCompressor()

        # traditional streaming compression
        dat1 = c.compress(b'123456')
        dat2 = c.compress(b'abcdef')
        dat3 = c.flush()

        # use .compress() method with mode argument
        compressed_dat1 = c.compress(raw_dat1, c.FLUSH_BLOCK)
        compressed_dat2 = c.compress(raw_dat2, c.FLUSH_FRAME)

.. hint:: Why does the :py:meth:`ZstdCompressor.compress` method have a *mode* parameter?

    #. When reusing a :py:class:`ZstdCompressor` object for a big number of individual pieces of data of the same type, it makes the operation more convenient. The object is thread-safe at method level.
    #. If data is generated with a single :py:attr:`~ZstdCompressor.FLUSH_FRAME` mode, the size of the uncompressed data will be recorded in the frame header.

Streaming decompression
-----------------------

You can use :py:class:`ZstdFile` for decompressing data as needed.
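For example, a minimal sketch of file-based streaming decompression (the file path and the ``process()`` callback are illustrative placeholders):

.. sourcecode:: python

    # stream-decompress a zstd file chunk by chunk,
    # without loading the whole file into memory
    with ZstdFile('data.zst', 'r') as f:
        while True:
            chunk = f.read(1024*1024)  # up to 1 MiB of decompressed data
            if not chunk:
                break
            process(chunk)  # placeholder for user code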
Advanced users may be interested in:

* class :py:class:`ZstdDecompressor`, similar to the decompressors in the Python standard library.
* class :py:class:`EndlessZstdDecompressor`, a decompressor that accepts multiple concatenated :ref:`frames`.

.. py:class:: ZstdDecompressor

    A streaming decompressor. After a :ref:`frame` is decompressed, it stops and sets the :py:attr:`~ZstdDecompressor.eof` flag to ``True``. For multiple frames data, use :py:class:`EndlessZstdDecompressor`.

    Thread-safe at method level.

    .. py:method:: __init__(self, zstd_dict=None, option=None)

        Initialize a ZstdDecompressor object.

        :param zstd_dict: Pre-trained dictionary for decompression.
        :type zstd_dict: ZstdDict
        :param dict option: A ``dict`` object that contains :ref:`advanced decompression parameters`. The default value ``None`` means to use zstd's default decompression parameters.

    .. py:method:: decompress(self, data, max_length=-1)

        Decompress *data*, returning the decompressed data as a ``bytes`` object.

        After a :ref:`frame` is decompressed, it stops and sets the :py:attr:`~ZstdDecompressor.eof` flag to ``True``.

        :param data: Data to be decompressed.
        :type data: bytes-like object
        :param int max_length: Maximum size of returned data. When it's negative, the output size is unlimited. When it's non-negative, returns at most *max_length* bytes of decompressed data. If this limit is reached and further output can (or may) be produced, the :py:attr:`~ZstdDecompressor.needs_input` attribute will be set to ``False``. In this case, the next call to this method may provide *data* as ``b''`` to obtain more of the output.

    .. py:attribute:: needs_input

        If the *max_length* output limit in the :py:meth:`~ZstdDecompressor.decompress` method has been reached, and the decompressor has (or may have) unconsumed input data, it will be set to ``False``. In this case, passing ``b''`` to the :py:meth:`~ZstdDecompressor.decompress` method may output further data.

        If this attribute is ignored when there is unconsumed input data, there will be a small performance loss because of an extra memory copy.

    .. py:attribute:: eof

        ``True`` means the end of the first frame has been reached. If data is decompressed after that, an ``EOFError`` exception will be raised.

    .. py:attribute:: unused_data

        A bytes object. When the ZstdDecompressor object stops after decompressing a frame, this is the unused input data after the first frame. Otherwise it will be ``b''``.

    .. sourcecode:: python

        # --- unlimited output ---
        d1 = ZstdDecompressor()
        decompressed_dat1 = d1.decompress(dat1)
        decompressed_dat2 = d1.decompress(dat2)
        decompressed_dat3 = d1.decompress(dat3)
        assert d1.eof, 'data is an incomplete zstd frame.'

        # --- limited output ---
        d2 = ZstdDecompressor()
        while True:
            if d2.needs_input:
                dat = read_input(2*1024*1024)  # read 2 MiB input data
                if not dat:  # input stream ends
                    raise Exception('Input stream ends, but the end of '
                                    'the first frame is not reached.')
            else:  # maybe there is unconsumed input data
                dat = b''

            chunk = d2.decompress(dat, 10*1024*1024)  # limit output buffer to 10 MiB
            write_output(chunk)

            if d2.eof:  # reached the end of the first frame
                break

.. py:class:: EndlessZstdDecompressor

    A streaming decompressor. It doesn't stop after a :ref:`frame` is decompressed, and can be used to decompress multiple concatenated frames.

    Thread-safe at method level.

    .. py:method:: __init__(self, zstd_dict=None, option=None)

        The parameters are the same as for the :py:meth:`ZstdDecompressor.__init__` method.

    .. py:method:: decompress(self, data, max_length=-1)

        The parameters are the same as for the :py:meth:`ZstdDecompressor.decompress` method.
        After decompressing a frame, it doesn't stop like :py:meth:`ZstdDecompressor.decompress`.

    .. py:attribute:: needs_input

        It's the same as :py:attr:`ZstdDecompressor.needs_input`.

    .. py:attribute:: at_frame_edge

        ``True`` when both the input and output streams are at a :ref:`frame` edge, or when the decompressor has just been initialized.

        This flag could be used to check data integrity in some cases.

    .. sourcecode:: python

        # --- streaming decompression, unlimited output ---
        d1 = EndlessZstdDecompressor()
        decompressed_dat1 = d1.decompress(dat1)
        decompressed_dat2 = d1.decompress(dat2)
        decompressed_dat3 = d1.decompress(dat3)
        assert d1.at_frame_edge, 'data ends in an incomplete frame.'

        # --- streaming decompression, limited output ---
        d2 = EndlessZstdDecompressor()
        while True:
            if d2.needs_input:
                dat = read_input(2*1024*1024)  # read 2 MiB input data
                if not dat:  # input stream ends
                    if not d2.at_frame_edge:
                        raise Exception('data ends in an incomplete frame.')
                    break
            else:  # maybe there is unconsumed input data
                dat = b''

            chunk = d2.decompress(dat, 10*1024*1024)  # limit output buffer to 10 MiB
            write_output(chunk)

.. hint:: Why doesn't :py:class:`EndlessZstdDecompressor` stop at frame edges?

    If it did, unused input data after an edge would be copied to an internal buffer, which may be a performance overhead.

    If you want to stop at frame edges, write a wrapper using the :py:class:`ZstdDecompressor` class. And don't feed too much data at a time; the overhead of copying unconsumed input data to the :py:attr:`ZstdDecompressor.unused_data` attribute still exists.

.. _zstd_dict:

Dictionary
----------

This section contains:

* class :py:class:`ZstdDict`
* function :py:func:`train_dict`
* function :py:func:`finalize_dict`

.. note:: With a pre-trained zstd dictionary, the compression ratio achievable on small data (a few KiB) improves dramatically.

    **Background**

    The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and the reason is that compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no "past" to build upon.

    Zstd's training mode can be used to tune the algorithm for a selected type of data. Training is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called a "dictionary", which must be loaded before compression and decompression.

    See the FAQ in `this file `_ for details.

.. attention::

    #. If you lose a zstd dictionary, you can't decompress the corresponding data.
    #. A zstd dictionary has negligible effect on large data (multi-MiB) compression. If you want to use large dictionary content, see prefix (:py:attr:`ZstdDict.as_prefix`).
    #. There is a possibility that the dictionary content could be maliciously tampered with by a third party.

**Advanced dictionary training**

The pyzstd module only uses the zstd library's stable API. The stable API only exposes two dictionary training functions, corresponding to :py:func:`train_dict` and :py:func:`finalize_dict`. If you want to adjust advanced training parameters, you may use zstd's CLI program (not the pyzstd module's CLI); it has entries to the zstd library's experimental API.

.. py:class:: ZstdDict

    Represents a zstd dictionary, which can be used for compression/decompression. It's thread-safe, and can be shared by multiple :py:class:`ZstdCompressor` / :py:class:`ZstdDecompressor` objects.
    .. sourcecode:: python

        # load a zstd dictionary from file
        with io.open(dict_path, 'rb') as f:
            file_content = f.read()
        zd = ZstdDict(file_content)

        # use the dictionary to compress.
        # if a dictionary is used by a compressor multiple times, reusing
        # a compressor object is faster, see .as_undigested_dict doc.
        compressed_dat = compress(raw_dat, zstd_dict=zd)

        # use the dictionary to decompress
        decompressed_dat = decompress(compressed_dat, zstd_dict=zd)

    .. versionchanged:: 0.15.7

        When compressing, load an undigested dictionary instead of a digested dictionary by default, see :py:attr:`~ZstdDict.as_digested_dict`. Also add a ``.__len__()`` method that returns the content size.

    .. py:method:: __init__(self, dict_content, is_raw=False)

        Initialize a ZstdDict object.

        :param dict_content: Dictionary's content.
        :type dict_content: bytes-like object
        :param is_raw: This parameter is for advanced users. ``True`` means the *dict_content* argument is a "raw content" dictionary, free of any format restriction. ``False`` means the *dict_content* argument is an ordinary zstd dictionary, created by zstd functions and following a specified format.
        :type is_raw: bool

    .. py:attribute:: dict_content

        The content of the zstd dictionary, a ``bytes`` object. It's the same as the *dict_content* argument in the :py:meth:`~ZstdDict.__init__` method. It can be used with other programs.

    .. py:attribute:: dict_id

        ID of the zstd dictionary, a 32-bit unsigned integer value.

        Non-zero means an ordinary dictionary, created by zstd functions and following a specified format.

        ``0`` means a "raw content" dictionary, free of any format restriction, used by advanced users. (Note that the meaning of ``0`` is different from ``dictionary_id`` in the :py:func:`get_frame_info` function.)

    .. py:attribute:: as_digested_dict

        Load as a digested dictionary, see below.

        .. versionadded:: 0.15.7

    .. py:attribute:: as_undigested_dict

        Load as an undigested dictionary.

        Digesting a dictionary is a costly operation. These two attributes can control how the dictionary is loaded into the compressor, by passing them as the `zstd_dict` argument: ``compress(dat, zstd_dict=zd.as_digested_dict)``

        If these two attributes are not specified, an **undigested** dictionary is used for compression by default: ``compress(dat, zstd_dict=zd)``

        .. list-table:: Difference for compression
           :widths: 12 12 12
           :header-rows: 1

           * -
             - | Digested
               | dictionary
             - | Undigested
               | dictionary
           * - | Some advanced
               | parameters of
               | compressor may
               | be overridden
               | by dictionary's
               | parameters
             - | ``windowLog``, ``hashLog``,
               | ``chainLog``, ``searchLog``,
               | ``minMatch``, ``targetLength``,
               | ``strategy``,
               | ``enableLongDistanceMatching``,
               | ``ldmHashLog``, ``ldmMinMatch``,
               | ``ldmBucketSizeLog``,
               | ``ldmHashRateLog``, and some
               | non-public parameters.
             - No
           * - | ZstdDict has
               | internal cache
               | for this
             - | Yes. It's faster when
               | loading again a digested
               | dictionary with the same
               | compression level.
             - | No. If loading an undigested
               | dictionary multiple times,
               | consider reusing a
               | compressor object.

        For decompression, they have the same effect. Pyzstd uses a **digested** dictionary for decompression by default, which is faster when loading again: ``decompress(dat, zstd_dict=zd)``

        .. versionadded:: 0.15.7

    .. py:attribute:: as_prefix

        Load the dictionary content into the compressor/decompressor as a "prefix", by passing this attribute as the `zstd_dict` argument: ``compress(dat, zstd_dict=zd.as_prefix)``

        A prefix can be used for the :ref:`patching engine` scenario.

        #. A prefix is compatible with "long distance matching", while a dictionary is not.
        #. A prefix only works for the first frame; then the compressor/decompressor returns to the no-prefix state. This is different from a dictionary, which can be used for all subsequent frames. Therefore, be careful when using it with ZstdFile/SeekableZstdFile.
        #. When decompressing, the same prefix must be used as when compressing.
        #. Loading a prefix into a compressor is costly.
        #. Loading a prefix into a decompressor is not costly.

        .. versionadded:: 0.15.7

.. py:function:: train_dict(samples, dict_size)

    Train a zstd dictionary. See the FAQ in `this file `_ for details.

    :param samples: An iterable of samples; a sample is a bytes-like object that represents a file.
    :type samples: iterable
    :param int dict_size: Returned zstd dictionary's **maximum** size, in bytes.
    :return: Trained zstd dictionary. If you want to save the dictionary to a file, save the :py:attr:`ZstdDict.dict_content` attribute.
    :rtype: ZstdDict

    .. sourcecode:: python

        def samples():
            rootdir = r"E:\data"

            # Note that the order of the files may be different,
            # therefore the generated dictionary may be different.
            for parent, dirnames, filenames in os.walk(rootdir):
                for filename in filenames:
                    path = os.path.join(parent, filename)
                    with io.open(path, 'rb') as f:
                        dat = f.read()
                    yield dat

        dic = pyzstd.train_dict(samples(), 100*1024)

.. py:function:: finalize_dict(zstd_dict, samples, dict_size, level)

    Given a custom content as a basis for the dictionary, and a set of samples, finalize the dictionary by adding headers and statistics according to the zstd dictionary format. See the FAQ in `this file `_ for details.

    :param zstd_dict: A basis dictionary.
    :type zstd_dict: ZstdDict
    :param samples: An iterable of samples; a sample is a bytes-like object that represents a file.
    :type samples: iterable
    :param int dict_size: Returned zstd dictionary's **maximum** size, in bytes.
    :param int level: The compression level expected to be used in production. The statistics for each compression level differ, so tuning the dictionary for the compression level can help quite a bit.
    :return: Finalized zstd dictionary. If you want to save the dictionary to a file, save the :py:attr:`ZstdDict.dict_content` attribute.
    :rtype: ZstdDict

Module-level functions
----------------------

This section contains:

* function :py:func:`get_frame_info`, get frame information from a frame header.
* function :py:func:`get_frame_size`, get a frame's size.

.. py:function:: get_frame_info(frame_buffer)

    Get zstd frame information from a frame header.

    Return a 2-item namedtuple: (decompressed_size, dictionary_id)

    If ``decompressed_size`` is ``None``, the decompressed size is unknown.

    ``dictionary_id`` is a 32-bit unsigned integer value. ``0`` means the dictionary ID was not recorded in the frame header; the frame may or may not need a dictionary to be decoded, and the ID of such a dictionary is not specified. (Note that the meaning of ``0`` is different from the :py:attr:`ZstdDict.dict_id` attribute.)

    It's possible that more items will be appended to the namedtuple in the future.

    :param frame_buffer: It should start from the beginning of a frame, and contain at least the frame header (6 to 18 bytes).
    :type frame_buffer: bytes-like object
    :return: Information about a frame.
    :rtype: namedtuple
    :raises ZstdError: When parsing the frame header fails.

    .. sourcecode:: python

        >>> pyzstd.get_frame_info(compressed_dat[:20])
        frame_info(decompressed_size=687379, dictionary_id=1040992268)

.. py:function:: get_frame_size(frame_buffer)

    Get the size of a zstd frame, including the frame header and the 4-byte checksum if present.
    It will iterate over all blocks' headers within a frame to accumulate the frame's size.

    :param frame_buffer: It should start from the beginning of a frame, and contain at least one complete frame.
    :type frame_buffer: bytes-like object
    :return: The size of a zstd frame.
    :rtype: int
    :raises ZstdError: When it fails.

    .. sourcecode:: python

        >>> pyzstd.get_frame_size(compressed_dat)
        252874

Module-level variables
----------------------

This section contains:

* :py:data:`zstd_version`, a ``str``.
* :py:data:`zstd_version_info`, a ``tuple``.
* :py:data:`compressionLevel_values`, some values defined by the underlying zstd library.
* :py:data:`zstd_support_multithread`, whether the underlying zstd library supports multi-threaded compression.

.. py:data:: zstd_version

    Underlying zstd library's version, ``str`` form.

    .. sourcecode:: python

        >>> pyzstd.zstd_version
        '1.4.5'

.. py:data:: zstd_version_info

    Underlying zstd library's version, ``tuple`` form.

    .. sourcecode:: python

        >>> pyzstd.zstd_version_info
        (1, 4, 5)

.. py:data:: compressionLevel_values

    A 3-item namedtuple, values defined by the underlying zstd library, see :ref:`compression level` for details.

    ``default`` is the default compression level, used when the compression level is set to ``0`` or not set. ``min``/``max`` are the minimum/maximum available values of the compression level, both inclusive.

    .. sourcecode:: python

        >>> pyzstd.compressionLevel_values  # 131072 = 128*1024
        values(default=3, min=-131072, max=22)

.. py:data:: zstd_support_multithread

    Whether the underlying zstd library was compiled with :ref:`multi-threaded compression` support.

    It's almost always ``True``. It's ``False`` when dynamically linked to a zstd library that was compiled without multi-threaded support. Ordinary users will not encounter this situation.

    .. versionadded:: 0.15.1

    .. sourcecode:: python

        >>> pyzstd.zstd_support_multithread
        True

ZstdFile class and open() function
----------------------------------

This section contains:

* class :py:class:`ZstdFile`, open a zstd-compressed file in binary mode.
* function :py:func:`open`, open a zstd-compressed file in binary or text mode.

.. py:class:: ZstdFile

    Open a zstd-compressed file in binary mode.

    This class is very similar to the `bz2.BZ2File `_ / `gzip.GzipFile `_ / `lzma.LZMAFile `_ classes in the Python standard library, but the performance is much better than them.

    Like the BZ2File/GzipFile/LZMAFile classes, ZstdFile is not thread-safe, so if you need to use a single ZstdFile object from multiple threads, it is necessary to protect it with a lock.

    It can be used with Python's ``tarfile`` module, see :ref:`this note`.

    .. py:method:: __init__(self, filename, mode="r", *, level_or_option=None, zstd_dict=None)

        The *filename* argument can be an existing `file object `_ to wrap, or the name of the file to open (as a ``str``, ``bytes`` or `path-like `_ object). When wrapping an existing file object, the wrapped file will not be closed when the ZstdFile is closed.

        The *mode* argument can be either "r" for reading (default), "w" for overwriting, "x" for exclusive creation, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively.
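        For example, a minimal sketch of both forms of the *filename* argument (the paths and data are illustrative):

        .. sourcecode:: python

            # open by file path, append mode: appends a new frame
            with ZstdFile('data.zst', 'a') as f:
                f.write(b'more data')

            # wrap an existing file object; it is not closed
            # when the ZstdFile is closed
            import io
            bio = io.BytesIO()
            with ZstdFile(bio, 'w', level_or_option=5) as f:
                f.write(b'some data')
            compressed = bio.getvalue()  # bio is still open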
        In reading mode (decompression), these methods and statements are available:

        * `.read(size=-1) `_
        * `.read1(size=-1) `_
        * `.readinto(b) `_
        * `.readinto1(b) `_
        * `.readline(size=-1) `_
        * `.seek(offset, whence=io.SEEK_SET) `_, note that if seeking to a position before the current position, or seeking to a position relative to the end of the file (the first time), the decompression has to be restarted from zero. If you seek, consider using the :py:class:`SeekableZstdFile` class.
        * `.peek(size=-1) `_
        * `Iteration `_, yields lines; the line terminator is ``b'\n'``.

        .. _write_methods:

        In writing modes (compression), these methods are available:

        * `.write(b) `_
        * `.flush(mode=ZstdFile.FLUSH_BLOCK) `_, flush to the underlying stream:

          #. The *mode* argument can be ``ZstdFile.FLUSH_BLOCK``, ``ZstdFile.FLUSH_FRAME``.
          #. Invoking this method repeatedly with ``.FLUSH_FRAME`` will not generate empty content frames.
          #. Abuse of this method will reduce compression ratio, use it only when necessary.
          #. If the program is interrupted afterwards, all data can be recovered. To ensure saving to disk, `os.fsync(fd) `_ is also needed.

          (*Added in version 0.15.1, added mode argument in version 0.15.9.*)

        In both reading and writing modes, these methods and this property are available:

        * `.close() `_
        * `.tell() `_, returns the current position in the uncompressed content. In append mode, the initial position is 0.
        * `.fileno() `_
        * `.closed `_ (a property attribute)
        * `.writable() `_
        * `.readable() `_
        * `.seekable() `_

.. py:function:: open(filename, mode="rb", *, level_or_option=None, zstd_dict=None, encoding=None, errors=None, newline=None)

    Open a zstd-compressed file in binary or text mode, returning a file object.

    This function is very similar to the `bz2.open() `_ / `gzip.open() `_ / `lzma.open() `_ functions in the Python standard library.

    The *filename* parameter can be an existing `file object `_ to wrap, or the name of the file to open (as a ``str``, ``bytes`` or `path-like `_ object). When wrapping an existing file object, the wrapped file will not be closed when the returned file object is closed.

    The *mode* parameter can be any of "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for binary mode, or "rt", "wt", "xt", or "at" for text mode. The default is "rb".

    In reading mode (decompression), the *level_or_option* parameter can only be a ``dict`` object that represents decompression options; an ``int`` compression level is not supported in this case.

    In binary mode, a :py:class:`ZstdFile` object is returned. In text mode, a :py:class:`ZstdFile` object is created, and wrapped in an `io.TextIOWrapper `_ object with the specified encoding, error handling behavior, and line ending(s).

SeekableZstdFile class
----------------------

This section contains facilities supporting the `Zstandard Seekable Format `_:

* exception :py:class:`SeekableFormatError`
* class :py:class:`SeekableZstdFile`

.. py:exception:: SeekableFormatError

    An error related to the "Zstandard Seekable Format". Subclass of ``Exception``.

    .. versionadded:: 0.15.9

.. py:class:: SeekableZstdFile

    Subclass of :py:class:`ZstdFile`. This class can **only** create/write/read `Zstandard Seekable Format `_ files, or read 0-size files. It provides relatively fast seeking ability in read mode.

    Note that it doesn't verify/write the XXH64 checksum fields. Using :py:attr:`~CParameter.checksumFlag` is faster and more flexible.

    The :py:class:`ZstdFile` class can also read "Zstandard Seekable Format" files, but without fast seeking ability.

    .. versionadded:: 0.15.9
    .. py:method:: __init__(self, filename, mode="r", *, level_or_option=None, zstd_dict=None, max_frame_content_size=1024*1024*1024)

        Same as :py:meth:`ZstdFile.__init__`, except that in append mode ("a", "ab"), the *filename* argument can't be a file object; please use a file path (str/bytes/PathLike form) in this mode.

        .. attention::

            The *max_frame_content_size* argument is used in compression modes (w, wb, a, ab, x, xb). When the uncompressed data length reaches *max_frame_content_size*, the current :ref:`frame` is closed automatically. The default value (1 GiB) is almost useless; users should set this value based on the data and the seeking requirements.

            To retrieve a byte, all data before this byte in that frame needs to be decompressed. So a small size will increase seeking speed but reduce compression ratio, while a large size will reduce seeking speed but increase compression ratio. Avoid really tiny frame sizes (<1 KiB); that would hurt the compression ratio considerably.

            You can also manually close a frame using :ref:`f.flush(mode=f.FLUSH_FRAME)`.

    .. py:staticmethod:: is_seekable_format_file(filename)

        This static method checks whether a file is a "Zstandard Seekable Format" file or a 0-size file.

        It parses the seek table at the end of the file, and returns ``True`` if there is no format error.

        :param filename: A file to be checked
        :type filename: File path (str/bytes/PathLike), or file object in reading mode.
        :return: Result
        :rtype: bool

    .. sourcecode:: python

        # Convert an existing zstd file to a Zstandard Seekable Format file,
        # 10 MiB per frame.
        with ZstdFile(IN_FILE, 'r') as ifh:
            with SeekableZstdFile(OUT_FILE, 'w',
                                  max_frame_content_size=10*1024*1024) as ofh:
                while True:
                    dat = ifh.read(30*1024*1024)
                    if not dat:
                        break
                    ofh.write(dat)

        # returns True
        SeekableZstdFile.is_seekable_format_file(OUT_FILE)

Advanced parameters
-------------------

This section contains the classes :py:class:`CParameter`, :py:class:`DParameter`, and :py:class:`Strategy`; they are subclasses of ``IntEnum``, used for setting advanced parameters.

Attributes of the :py:class:`CParameter` class:

- Compression level (:py:attr:`~CParameter.compressionLevel`)
- Compression algorithm parameters (:py:attr:`~CParameter.windowLog`, :py:attr:`~CParameter.hashLog`, :py:attr:`~CParameter.chainLog`, :py:attr:`~CParameter.searchLog`, :py:attr:`~CParameter.minMatch`, :py:attr:`~CParameter.targetLength`, :py:attr:`~CParameter.strategy`, :py:attr:`~CParameter.targetCBlockSize`)
- Long distance matching (:py:attr:`~CParameter.enableLongDistanceMatching`, :py:attr:`~CParameter.ldmHashLog`, :py:attr:`~CParameter.ldmMinMatch`, :py:attr:`~CParameter.ldmBucketSizeLog`, :py:attr:`~CParameter.ldmHashRateLog`)
- Misc (:py:attr:`~CParameter.contentSizeFlag`, :py:attr:`~CParameter.checksumFlag`, :py:attr:`~CParameter.dictIDFlag`)
- Multi-threaded compression (:py:attr:`~CParameter.nbWorkers`, :py:attr:`~CParameter.jobSize`, :py:attr:`~CParameter.overlapLog`)

Attribute of the :py:class:`DParameter` class:

- Decompression parameter (:py:attr:`~DParameter.windowLogMax`)

Attributes of the :py:class:`Strategy` class: :py:attr:`~Strategy.fast`, :py:attr:`~Strategy.dfast`, :py:attr:`~Strategy.greedy`, :py:attr:`~Strategy.lazy`, :py:attr:`~Strategy.lazy2`, :py:attr:`~Strategy.btlazy2`, :py:attr:`~Strategy.btopt`, :py:attr:`~Strategy.btultra`, :py:attr:`~Strategy.btultra2`.

.. _CParameter:

.. py:class:: CParameter(IntEnum)

    Advanced compression parameters.

    When using them, put the parameters in a ``dict`` object; the key is a :py:class:`CParameter` name, the value is a 32-bit signed integer value.
    .. sourcecode:: python

        option = {CParameter.compressionLevel : 10,
                  CParameter.checksumFlag : 1}

        # used with compress() function
        compressed_dat = compress(raw_dat, option)

        # used with ZstdCompressor object
        c = ZstdCompressor(level_or_option=option)
        compressed_dat1 = c.compress(raw_dat)
        compressed_dat2 = c.flush()

    Parameter values should belong to an interval with lower and upper bounds; otherwise they will either trigger an error or be clamped silently.

    The constant values mentioned below are defined in `zstd.h `_; note that these values may be different in different zstd versions.

    .. py:method:: bounds(self)

        Return the lower and upper bounds of a parameter, both inclusive.

        .. sourcecode:: python

            >>> CParameter.compressionLevel.bounds()
            (-131072, 22)
            >>> CParameter.windowLog.bounds()
            (10, 31)
            >>> CParameter.enableLongDistanceMatching.bounds()
            (0, 1)

    .. py:attribute:: compressionLevel

        Set compression parameters according to the pre-defined compressionLevel table, see :ref:`compression level` for details.

        Setting a compression level does not set all other compression parameters to default. Setting this will dynamically impact the compression parameters which have not been manually set; the manually set ones will "stick".

    .. py:attribute:: windowLog

        Maximum allowed back-reference distance, expressed as a power of 2, ``1 << windowLog`` bytes. Larger values require more memory and typically compress more.

        This sets a memory budget for streaming decompression. Using a value greater than ``ZSTD_WINDOWLOG_LIMIT_DEFAULT`` requires explicitly allowing such a size at the streaming decompression stage, see :py:attr:`DParameter.windowLogMax`. ``ZSTD_WINDOWLOG_LIMIT_DEFAULT`` is 27 in zstd v1.2+, which means 128 MiB (1 << 27).

        Must be clamped between ``ZSTD_WINDOWLOG_MIN`` and ``ZSTD_WINDOWLOG_MAX``.

        Special: value ``0`` means "use default windowLog"; the value is then dynamically set, see the "W" column in `this table `_.

    .. py:attribute:: hashLog

        Size of the initial probe table, as a power of 2; the resulting memory usage is ``1 << (hashLog+2)`` bytes. Must be clamped between ``ZSTD_HASHLOG_MIN`` and ``ZSTD_HASHLOG_MAX``.

        Larger tables improve the compression ratio of strategies <= :py:attr:`~Strategy.dfast`, and improve the speed of strategies > :py:attr:`~Strategy.dfast`.

        Special: value ``0`` means "use default hashLog"; the value is then dynamically set, see the "H" column in `this table `_.

    .. py:attribute:: chainLog

        Size of the multi-probe search table, as a power of 2; the resulting memory usage is ``1 << (chainLog+2)`` bytes. Must be clamped between ``ZSTD_CHAINLOG_MIN`` and ``ZSTD_CHAINLOG_MAX``.

        Larger tables result in better and slower compression. This parameter is useless for the :py:attr:`~Strategy.fast` strategy. It's still useful when using the :py:attr:`~Strategy.dfast` strategy, in which case it defines a secondary probe table.

        Special: value ``0`` means "use default chainLog"; the value is then dynamically set, see the "C" column in `this table `_.

    .. py:attribute:: searchLog

        Number of search attempts, as a power of 2. More attempts result in better and slower compression. This parameter is useless for the :py:attr:`~Strategy.fast` and :py:attr:`~Strategy.dfast` strategies.

        Special: value ``0`` means "use default searchLog"; the value is then dynamically set, see the "S" column in `this table `_.

    .. py:attribute:: minMatch

        Minimum size of searched matches. Note that Zstandard can still find matches of smaller size; it just tweaks its search algorithm to look for this size and larger.
        Larger values increase compression and decompression speed, but decrease the ratio. Must be clamped between ``ZSTD_MINMATCH_MIN`` and ``ZSTD_MINMATCH_MAX``. Note that currently, for all strategies < :py:attr:`~Strategy.btopt`, the effective minimum is ``4``; for all strategies > :py:attr:`~Strategy.fast`, the effective maximum is ``6``.

        Special: value ``0`` means "use default minMatchLength"; the value is then dynamically set, see the "L" column in `this table `_.

    .. py:attribute:: targetLength

        The impact of this field depends on the strategy.

        For strategies :py:attr:`~Strategy.btopt`, :py:attr:`~Strategy.btultra` & :py:attr:`~Strategy.btultra2`: the length of a match considered "good enough" to stop the search. Larger values make compression stronger, and slower.

        For strategy :py:attr:`~Strategy.fast`: the distance between match sampling. Larger values make compression faster, and weaker.

        Special: value ``0`` means "use default targetLength"; the value is then dynamically set, see the "TL" column in `this table `_.

    .. py:attribute:: strategy

        See the :py:attr:`Strategy` class definition. The higher the value of the selected strategy, the more complex it is, resulting in stronger and slower compression.

        Special: value ``0`` means "use default strategy"; the value is then dynamically set, see the "strat" column in `this table `_.

    .. py:attribute:: targetCBlockSize

        Attempts to fit compressed block sizes into approximately targetCBlockSize (in bytes). Note that it's not a guarantee, just a convergence target.

        This is helpful in low bandwidth streaming environments to improve end-to-end latency, when a client can make use of partial documents.

        Bound by ``ZSTD_TARGETCBLOCKSIZE_MIN`` and ``ZSTD_TARGETCBLOCKSIZE_MAX``. No target when targetCBlockSize == 0. Default value is ``0``.

        Only available for zstd v1.5.6+.

    .. py:attribute:: enableLongDistanceMatching

        Enable long distance matching. Default value is ``0``, can be ``1``.

        This parameter is designed to improve compression ratio, for large inputs, by finding large matches at long distance. It increases memory usage and window size.

        Note:

        * Enabling this parameter increases the default :py:attr:`~CParameter.windowLog` to 128 MiB except when expressly set to a different value.
        * This will be enabled by default if :py:attr:`~CParameter.windowLog` >= 128 MiB and the compression strategy is >= :py:attr:`~Strategy.btopt` (compression level 16+).

    .. py:attribute:: ldmHashLog

        Size of the table for long distance matching, as a power of 2. Larger values increase memory usage and compression ratio, but decrease compression speed.

        Must be clamped between ``ZSTD_HASHLOG_MIN`` and ``ZSTD_HASHLOG_MAX``; default: :py:attr:`~CParameter.windowLog` - 7.

        Special: value ``0`` means "automatically determine hashlog".

    .. py:attribute:: ldmMinMatch

        Minimum match size for the long distance matcher. Larger/too small values usually decrease compression ratio.

        Must be clamped between ``ZSTD_LDM_MINMATCH_MIN`` and ``ZSTD_LDM_MINMATCH_MAX``.

        Special: value ``0`` means "use default value" (default: 64).

    .. py:attribute:: ldmBucketSizeLog

        Log size of each bucket in the LDM hash table for collision resolution. Larger values improve collision resolution but decrease compression speed.

        The maximum value is ``ZSTD_LDM_BUCKETSIZELOG_MAX``.

        Special: value ``0`` means "use default value" (default: 3).

    .. py:attribute:: ldmHashRateLog

        Frequency of inserting/looking up entries into the LDM hash table.

        Must be clamped between 0 and ``(ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN)``.
        Default is MAX(0, (:py:attr:`~CParameter.windowLog` - :py:attr:`~CParameter.ldmHashLog`)), optimizing hash table usage.

        Larger values improve compression speed. Deviating far from the default value will likely result in a compression ratio decrease.

        Special: value ``0`` means "automatically determine hashRateLog".

    .. _content_size:

    .. py:attribute:: contentSizeFlag

        The uncompressed content size will be written into the frame header whenever known. Default value is ``1``, can be ``0``.

        In traditional streaming compression, the content size is unknown. In these compressions, the content size is known:

        * :py:func:`compress` function
        * :py:class:`ZstdCompressor` class using a single :py:attr:`~ZstdCompressor.FLUSH_FRAME` mode

        The field in the frame header is 1/2/4/8 bytes, depending on the size value. It may help the decompression code to allocate the output buffer faster.

        \* :py:class:`ZstdCompressor` has an undocumented method to set the size; run ``help(ZstdCompressor._set_pledged_input_size)`` to see the usage.

    .. py:attribute:: checksumFlag

        A 4-byte checksum (XXH64) of the uncompressed content is written at the end of the frame. Default value is ``0``, can be ``1``.

        Zstd's decompression code verifies it. If the checksum mismatches, a :py:class:`ZstdError` exception is raised, with a message like "Restored data doesn't match checksum".

    .. py:attribute:: dictIDFlag

        When applicable, the dictionary's ID is written into the frame header. See :ref:`this note` for details. Default value is ``1``, can be ``0``.

    .. py:attribute:: nbWorkers

        Select how many threads will be spawned to compress in parallel. When nbWorkers >= ``1``, it enables multi-threaded compression; ``1`` means "1-thread multi-threaded mode". See :ref:`zstd multi-threaded compression` for details.

        More workers improve speed, but also increase memory usage.

        ``0`` (default) means "single-threaded mode"; no worker is spawned, and compression is performed inside the caller's thread.

        .. versionchanged:: 0.15.1

            Setting to ``1`` means "1-thread multi-threaded mode", instead of "single-threaded mode".

    .. py:attribute:: jobSize

        Size of a compression job, in bytes. This value is enforced only when :py:attr:`~CParameter.nbWorkers` >= 1. Each compression job is completed in parallel, so this value can indirectly impact the number of active threads.

        ``0`` means default, which is dynamically determined based on compression parameters.

        A non-zero value will be silently clamped to:

        * minimum value: ``max(overlap_size, 512_KiB)``. overlap_size is specified by the :py:attr:`~CParameter.overlapLog` parameter.
        * maximum value: ``512_MiB if 32_bit_build else 1024_MiB``.

    .. py:attribute:: overlapLog

        Control the overlap size, as a fraction of the window size. (The "window size" here is not strictly :py:attr:`~CParameter.windowLog`, see the zstd source code.) This value is enforced only when :py:attr:`~CParameter.nbWorkers` >= 1.

        The overlap size is an amount of data reloaded from the previous job at the beginning of a new job. It helps preserve compression ratio, while each job is compressed in parallel. Larger values increase compression ratio, but decrease speed.

        Possible values range from 0 to 9:

        - 0 means "default": the value will be determined by the library, varying between 6 and 9 depending on :py:attr:`~CParameter.strategy`
        - 1 means "no overlap"
        - 9 means "full overlap", using a full window size

        Each intermediate rank increases/decreases the load size by a factor of 2: 9: full window; 8: w/2; 7: w/4; 6: w/8; 5: w/16; 4: w/32; 3: w/64; 2: w/128; 1: no overlap; 0: default.

.. _DParameter:
.. _DParameter:

.. py:class:: DParameter(IntEnum)

    Advanced decompression parameters. When using it, put the parameters in a ``dict`` object: the key is a :py:class:`DParameter` name, the value is a 32-bit signed integer.

    .. sourcecode:: python

        # set memory allocation limit to 16 MiB (1 << 24)
        option = {DParameter.windowLogMax : 24}

        # used with decompress() function
        decompressed_dat = decompress(dat, option=option)

        # used with ZstdDecompressor object
        d = ZstdDecompressor(option=option)
        decompressed_dat = d.decompress(dat)

    Each parameter value should belong to an interval with lower and upper bounds, otherwise it will either trigger an error or be clamped silently. The constant values mentioned below are defined in `zstd.h `_; note that these values may differ between zstd versions.

    .. py:method:: bounds(self)

        Return the lower and upper bounds of a parameter, both inclusive.

        .. sourcecode:: python

            >>> DParameter.windowLogMax.bounds()
            (10, 31)

    .. py:attribute:: windowLogMax

        Select a size limit (as a power of 2) beyond which the streaming API will refuse to allocate a memory buffer, in order to protect the host from unreasonable memory requirements.

        If a :ref:`frame<frame_block>` requires more memory than the set value, a :py:class:`ZstdError` exception is raised, with a message like "Frame requires too much memory for decoding".

        This parameter is only useful in streaming mode, since no internal buffer is allocated in single-pass mode. The :py:func:`decompress` function may use streaming mode or single-pass mode.

        By default, a decompression context accepts window sizes <= ``(1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT)``; the constant is ``27`` in zstd v1.2+, which means 128 MiB (1 << 27). If a frame's requested window size is greater than this value, this parameter needs to be set explicitly.

        Special: value ``0`` means "use default maximum windowLog".

.. py:class:: Strategy(IntEnum)

    Used for :py:attr:`CParameter.strategy`.

    Compression strategies, listed from fastest to strongest.

    Note: new strategies **might** be added in the future, only the order (from fast to strong) is guaranteed.

    .. py:attribute:: fast
    .. py:attribute:: dfast
    .. py:attribute:: greedy
    .. py:attribute:: lazy
    .. py:attribute:: lazy2
    .. py:attribute:: btlazy2
    .. py:attribute:: btopt
    .. py:attribute:: btultra
    .. py:attribute:: btultra2

    .. sourcecode:: python

        option = {CParameter.strategy : Strategy.lazy2,
                  CParameter.checksumFlag : 1}
        compressed_dat = compress(raw_dat, option)

Informative notes
-----------------

Compression level
>>>>>>>>>>>>>>>>>

.. _compression_level:

.. note:: Compression level

    Compression level is an integer:

    * ``1`` to ``22`` (currently), regular levels. Levels >= 20, labeled *ultra*, should be used with caution, as they require more memory.
    * ``0`` means use the default level, which is currently ``3`` as defined by the underlying zstd library.
    * ``-131072`` to ``-1``, negative levels extend the range of speed vs ratio preferences. The lower the level, the faster the speed, at the cost of compression ratio. (131072 = 128*1024.)

    :py:data:`compressionLevel_values` are some values defined by the underlying zstd library.

    **For advanced users**

    Compression levels are just numbers that map to a set of compression parameters, see `this table `_ for an overview. The parameters may be adjusted by the underlying zstd library after gathering some information, such as the data size, or whether a dictionary is used.

    Setting a compression level does not reset all the other :ref:`compression parameters` to default: it dynamically impacts only the compression parameters which have not been manually set; the manually set ones will "stick".
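To make the note above concrete, a minimal illustrative sketch (``raw_dat`` is an assumed variable) showing a negative level, :py:data:`compressionLevel_values`, and the "stick" behavior:

.. sourcecode:: python

    # negative level: faster speed, lower ratio
    fast_dat = compress(raw_dat, -5)

    # (default, min, max) levels defined by the underlying zstd library
    default_level = compressionLevel_values.default

    # set a level, then manually override one parameter;
    # the other parameters still follow the level's mapping
    option = {CParameter.compressionLevel: 19,
              CParameter.windowLog: 25}
    strong_dat = compress(raw_dat, option)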
Frame and block
>>>>>>>>>>>>>>>

.. _frame_block:

.. note:: Frame and block

    **Frame**

    Zstd data consists of one or more independent "frames". The decompressed content of multiple concatenated frames is the concatenation of each frame's decompressed content.

    A frame is completely independent, has a frame header, and a set of parameters which tells the decoder how to decompress it.

    In addition to normal frames, there are `skippable frames `_ that can contain any user-defined data; a skippable frame will be decompressed to ``b''``.

    **Block**

    A frame encapsulates one or multiple "blocks". A block has a guaranteed maximum size (3-byte block header + 128 KiB); the actual maximum size depends on frame parameters. Unlike independent frames, each block depends on previous blocks for proper decoding, but doesn't need the following blocks: a complete block can be fully decompressed. So flushing a block may be used in communication scenarios, see :py:attr:`ZstdCompressor.FLUSH_BLOCK`.

.. attention:: In some `language bindings `_, the decompress() function doesn't support multiple frames, and/or doesn't support a frame with unknown :ref:`content size<content_size>`; pay attention when compressing data for other language bindings.

Multi-threaded compression
>>>>>>>>>>>>>>>>>>>>>>>>>>

.. _mt_compression:

.. note:: Multi-threaded compression

    The zstd library supports multi-threaded compression. Set the :py:attr:`CParameter.nbWorkers` parameter >= ``1`` to enable multi-threaded compression; ``1`` means "1-thread multi-threaded mode". The threads are spawned by the underlying zstd library, not by the pyzstd module.

    .. sourcecode:: python

        # use 4 threads to compress
        option = {CParameter.nbWorkers : 4}
        compressed_dat = compress(raw_dat, option)

    The data will be split into portions and compressed in parallel. The portion size can be specified by the :py:attr:`CParameter.jobSize` parameter, and the overlap size by the :py:attr:`CParameter.overlapLog` parameter; usually there is no need to set these.

    The multi-threaded output will differ from the single-threaded output. However, both are deterministic, and the multi-threaded output produces the same compressed data no matter how many threads are used.

    The multi-threaded output is a single :ref:`frame<frame_block>`, and it's a little larger. Compressing 520.58 MiB of data, the single-threaded output is 273.55 MiB, the multi-threaded output is 274.33 MiB.

.. hint:: Using the number of physical CPU cores as the thread count may be the fastest; getting that number requires a third-party module. `os.cpu_count() `_ can only get the number of logical CPU cores (i.e. including hyper-threading).
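Following the hint above, a minimal illustrative sketch (``raw_dat`` is an assumed variable) that falls back to ``os.cpu_count()`` when no third-party module is available; note that this counts logical cores, so it may over-subscribe on hyper-threaded CPUs:

.. sourcecode:: python

    import os

    # logical core count, used as an approximation of physical cores
    workers = os.cpu_count() or 1
    option = {CParameter.nbWorkers: workers}
    compressed_dat = compress(raw_dat, option)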
Use with tarfile module
>>>>>>>>>>>>>>>>>>>>>>>

.. _with_tarfile:

.. note:: Use with tarfile module

    Python's `tarfile `_ module supports arbitrary compression algorithms by providing a file object.

    .. sourcecode:: python

        import tarfile

        # compression
        with ZstdFile('archive.tar.zst', mode='w') as _fileobj, tarfile.open(fileobj=_fileobj, mode='w') as tar:
            ...  # do something

        # decompression
        with ZstdFile('archive.tar.zst', mode='r') as _fileobj, tarfile.open(fileobj=_fileobj) as tar:
            ...  # do something

    Alternatively, it is possible to extend the ``TarFile`` class, so that it supports decompressing ``.tar.zst`` files automatically, as well as adding the following modes: ``r:zst``, ``w:zst`` and ``x:zst``.

    .. sourcecode:: python

        from tarfile import TarFile, ReadError
        from pyzstd import ZstdFile, ZstdError

        class CustomTarFile(TarFile):
            OPEN_METH = {
                **TarFile.OPEN_METH,
                'zst': 'zstopen'
            }

            @classmethod
            def zstopen(cls, name, mode='r', fileobj=None,
                        level_or_option=None, zstd_dict=None, **kwargs):
                """Open zstd compressed tar archive name for reading or writing.
                Appending is not allowed.
                """
                if mode not in ('r', 'w', 'x'):
                    raise ValueError("mode must be 'r', 'w' or 'x'")
                fileobj = ZstdFile(fileobj or name, mode,
                                   level_or_option=level_or_option,
                                   zstd_dict=zstd_dict)
                try:
                    tar = cls.taropen(name, mode, fileobj, **kwargs)
                except (ZstdError, EOFError) as exception:
                    fileobj.close()
                    if mode == 'r':
                        raise ReadError('not a zstd file') from exception
                    raise
                except:
                    fileobj.close()
                    raise
                tar._extfileobj = False
                return tar

        # compression
        with CustomTarFile.open('archive.tar.zst', mode='w:zst') as tar:
            ...  # do something

        # decompression
        with CustomTarFile.open('archive.tar.zst') as tar:
            ...  # do something

    In both implementations, when selectively reading files multiple times, the reader may seek to a position before the current position; the decompression then has to restart from zero. If this slows down the operations, you can:

    #. Use the :py:class:`SeekableZstdFile` class to create/read the .tar.zst file.
    #. Decompress the archive to a temporary file, and read from it.

    This code encapsulates the process:

    .. sourcecode:: python

        import contextlib
        import shutil
        import tarfile
        import tempfile
        import pyzstd

        @contextlib.contextmanager
        def ZstdTarReader(name, *, zstd_dict=None, level_or_option=None, **kwargs):
            with tempfile.TemporaryFile() as tmp_file:
                with pyzstd.open(name, level_or_option=level_or_option,
                                 zstd_dict=zstd_dict) as ifh:
                    # copy between file objects
                    # (copyfile expects paths, copyfileobj expects file objects)
                    shutil.copyfileobj(ifh, tmp_file)
                tmp_file.seek(0)
                with tarfile.TarFile(fileobj=tmp_file, **kwargs) as tar:
                    yield tar

        with ZstdTarReader('archive.tar.zst') as tar:
            ...  # do something

Zstd dictionary ID
>>>>>>>>>>>>>>>>>>

.. _dict_id:

.. note:: Zstd dictionary ID

    The dictionary ID is a 32-bit unsigned integer. The decoder uses it to check whether the correct dictionary is used.

    According to the zstd dictionary format `specification `_, if a dictionary is going to be distributed in public, the following ranges are reserved for future registrars and shall not be used:

    - low range: <= 32767
    - high range: >= 2^31

    Outside of these ranges, any value (32767 < v < 2^31) can be used freely, even in a public environment.

    In the zstd frame header, the `Dictionary_ID `_ field can be 0/1/2/4 bytes. If the value is small, this can save 2~3 bytes. Alternatively, don't write the ID at all by setting the :py:attr:`CParameter.dictIDFlag` parameter.

    The pyzstd module currently doesn't support specifying the ID when training a dictionary. If you want to specify the ID, modify the dictionary content according to the format specification, and take the corresponding risks.

    **Attention**

    In the :py:class:`ZstdDict` class, a :py:attr:`ZstdDict.dict_id` attribute == 0 means the dictionary is a "raw content" dictionary, free of any format restriction, intended for advanced users. Non-zero means it's an ordinary dictionary, created by zstd functions and following the format specification.

    In the :py:func:`get_frame_info` function, ``dictionary_id`` == 0 means the dictionary ID was not recorded in the frame header; the frame may or may not need a dictionary to be decoded, and the ID of such a dictionary is not specified.
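To illustrate the two meanings of ``0`` described in the note above, a minimal sketch (``dict_content`` and ``frame_dat`` are assumed variables):

.. sourcecode:: python

    zd = ZstdDict(dict_content)
    print(zd.dict_id)          # 0 would mean a "raw content" dictionary

    info = get_frame_info(frame_dat)
    print(info.dictionary_id)  # 0 means no ID recorded in the frame header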
Use zstd as a patching engine
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

.. _patching_engine:

.. note:: Use zstd as a patching engine

    Zstd can be used as a great `patching engine `_, although it has some limitations.

    In this particular scenario, pass the :py:attr:`ZstdDict.as_prefix` attribute as the `zstd_dict` argument. A "prefix" is similar to a "raw content" dictionary, but zstd internally handles them differently, see `this issue `_. Essentially, the prefix acts as if it were placed immediately before the data to be compressed. See "ZSTD_c_deterministicRefPrefix" in `this file `_.

    1, Generating a patch (compress)

    Assume VER_1 and VER_2 are two versions of the data. Let the "window" cover the longest version, by setting :py:attr:`CParameter.windowLog`. And enable "long distance matching" by setting :py:attr:`CParameter.enableLongDistanceMatching` to 1. The ``--patch-from`` option of the zstd CLI also uses other parameters, but these two matter the most.

    The valid value of `windowLog` is [10,30] in a 32-bit build, [10,31] in a 64-bit build. So even in a 64-bit build, there is a `2GiB length limit `_ (strictly speaking, 2 GiB minus ~100 KiB). When this limit is exceeded, the patch becomes very large and loses its point as a patch.

    .. sourcecode:: python

        # use VER_1 as prefix
        v1 = ZstdDict(VER_1, is_raw=True)

        # let the window cover the longest version.
        # don't forget to clamp windowLog to the valid range.
        # enable "long distance matching".
        windowLog = max(len(VER_1), len(VER_2)).bit_length()
        option = {CParameter.windowLog: windowLog,
                  CParameter.enableLongDistanceMatching: 1}

        # get a small PATCH
        PATCH = compress(VER_2, level_or_option=option, zstd_dict=v1.as_prefix)

    2, Applying the patch (decompress)

    A prefix is not a dictionary, so the frame header doesn't record a :ref:`dictionary ID<dict_id>`. When decompressing, you must use the same prefix as when compressing; otherwise a ZstdError exception may be raised with a message like "Data corruption detected".

    Decompressing requires a window of the same size as when compressing; this may be a problem for devices with little RAM. If the window is larger than 128 MiB, you need to explicitly set :py:attr:`DParameter.windowLogMax` to allow a larger window.

    .. sourcecode:: python

        # use VER_1 as prefix
        v1 = ZstdDict(VER_1, is_raw=True)

        # allow a large window, the actual windowLog is from the frame header.
        option = {DParameter.windowLogMax: 31}

        # get VER_2 from (VER_1 + PATCH)
        VER_2 = decompress(PATCH, zstd_dict=v1.as_prefix, option=option)

Deprecations
>>>>>>>>>>>>

See the `list of deprecations with alternatives <./deprecated.html>`_.

Also, note that `unsupported Python versions `_ are not tested against and have no wheels uploaded on PyPI.

pyzstd-0.19.1/docs/requirements.txt0000644000000000000000000000003513615410400014271 0ustar00myst-parser
sphinx_rtd_theme
pyzstd-0.19.1/docs/stdlib.md0000644000000000000000000001104413615410400012612 0ustar00# Migrating to the standard library

In Python 3.14, [the `compression.zstd` module](https://docs.python.org/3.14/library/compression.zstd.html) is available to support Zstandard natively. This guide was written to highlight the main differences and help with the migration.
_Note that to support Python versions before 3.14, you will need to install [the `backports.zstd` library](https://github.com/Rogdham/backports.zstd), created by the maintainer of `pyzstd`._ The examples in this guide assume the following imports: ```python import pyzstd import sys if sys.version_info >= (3, 14): from compression import zstd else: from backports import zstd ``` ## `level_or_option` parameter In `pyzstd`, the `level_or_option` parameter could accept either a compression level (as an integer) or a dictionary of options. In the standard library, this is split into two distinct parameters: `level` and `options`. Only one can be used at a time. ```python # before pyzstd.compress(data, 10) pyzstd.compress(data, level_or_option=10) # after zstd.compress(data, 10) zstd.compress(data, level=10) ``` ```python # before pyzstd.compress(data, {pyzstd.CParameter.checksumFlag: True}) pyzstd.compress(data, level_or_option={pyzstd.CParameter.checksumFlag: True}) # after zstd.compress(data, options={zstd.CompressionParameter.checksum_flag: True}) ``` ## `CParameter` and `DParameter` The `CParameter` and `DParameter` classes have been renamed to `CompressionParameter` and `DecompressionParameter` respectively. Additionally, attribute names now use snake_case instead of camelCase. ```python # before pyzstd.CParameter.enableLongDistanceMatching pyzstd.DParameter.windowLogMax # after zstd.CompressionParameter.enable_long_distance_matching zstd.DecompressionParameter.window_log_max ``` Finally, the `CParameter.targetCBlockSize` parameter is not available for now. Assuming a version of libzstd supporting it is used at runtime (1.5.6 or later), the integer `130` can be used as a key in the dictionary passed to the `options` parameter. ## `ZstdFile`'s `filename` parameter The first parameter of `ZstdFile` (`filename`) is now positional-only. ```python # before pyzstd.ZstdFile("file.zst") pyzstd.ZstdFile(fileobj) pyzstd.ZstdFile(filename="file.zst") pyzstd.ZstdFile(filename=fileobj) # after zstd.ZstdFile("file.zst") zstd.ZstdFile(fileobj) ``` ## `ZstdCompressor._set_pledged_input_size` The method `_set_pledged_input_size` of the `ZstdCompressor` class has been renamed to `set_pledged_input_size`. ## `EndlessZstdDecompressor` The `EndlessZstdDecompressor` class is not available. Here are possible alternatives: - Chain multiple `ZstdDecompressor` instances manually. - Include [this code snippet](https://gist.github.com/Rogdham/e2d694cee709e75240a1fd5278e99666#file-endless_zstd_decompressor-py) in your codebase. - Use the `decompress` function if the data is small enough. - Use a file-like interface via `ZstdFile`. ## `RichMemZstdCompressor` and `richmem_compress` The `RichMemZstdCompressor` class and `richmem_compress` function, which are deprecated in `pyzstd`, are not available. Use `compress` instead ([more details](./deprecated.md#richmem-compress)). ## `compress_stream` and `decompress_stream` The `compress_stream` and `decompress_stream` functions, which are deprecated in `pyzstd`, are not available. See [alternatives](./deprecated.md#compress-stream). ## `compressionLevel_values` The constant `compressionLevel_values` namedtuple is not available. Use the following alternatives: - `zstd.COMPRESSION_LEVEL_DEFAULT` for the default compression level. - `zstd.CompressionParameter.compression_level.bounds()` for the minimum and maximum compression levels. ## Exceptions raised The messages of raised exceptions are not always the same. 
When they are due to an error in the parameters used by the caller, the type of exceptions may change as well. ```python # before >>> pyzstd.compress(b'', {999:9999}) pyzstd.ZstdError: Zstd compression parameter "unknown parameter (key 999)" is invalid. (zstd v1.5.7) # after >>> zstd.compress(b'', options={999:9999}) ValueError: invalid compression parameter 'unknown parameter (key 999)' ``` ## `ZstdFile` The `read_size` and `write_size` parameters of `ZstdFile` are not available. ## `ZstdDict` The `is_raw` parameter of `ZstdDict` is no longer positional. Call it by its name instead. ```python # before pyzstd.ZstdDict(data, True) # after zstd.ZstdDict(data, is_raw=True) ``` ## `SeekableZstdFile` Support for the Zstandard seekable format is not available. Continue using `pyzstd` for now if the feature is required. pyzstd-0.19.1/src/pyzstd/__init__.py0000644000000000000000000007240513615410400014324 0ustar00from collections.abc import Callable, Mapping from enum import IntEnum from io import TextIOWrapper from os import PathLike import sys from typing import ( BinaryIO, ClassVar, Literal, NamedTuple, NoReturn, TypeAlias, cast, overload, ) import warnings if sys.version_info < (3, 14): from backports import zstd else: from compression import zstd if sys.version_info < (3, 13): from typing_extensions import deprecated else: from warnings import deprecated if sys.version_info < (3, 12): from typing_extensions import Buffer else: from collections.abc import Buffer from pyzstd._version import __version__ # noqa: F401 __doc__ = """\ Python bindings to Zstandard (zstd) compression library, the API style is similar to Python's bz2/lzma/zlib modules. Command line interface of this module: python -m pyzstd --help Documentation: https://pyzstd.readthedocs.io GitHub: https://github.com/Rogdham/pyzstd PyPI: https://pypi.org/project/pyzstd""" __all__ = ( "CParameter", "DParameter", "EndlessZstdDecompressor", "RichMemZstdCompressor", "SeekableFormatError", "SeekableZstdFile", "Strategy", "ZstdCompressor", "ZstdDecompressor", "ZstdDict", "ZstdError", "ZstdFile", "compress", "compress_stream", "compressionLevel_values", "decompress", "decompress_stream", "finalize_dict", "get_frame_info", "get_frame_size", "open", "richmem_compress", "train_dict", "zstd_support_multithread", "zstd_version", "zstd_version_info", ) class _DeprecatedPlaceholder: def __repr__(self) -> str: return "" _DEPRECATED_PLACEHOLDER = _DeprecatedPlaceholder() Strategy = zstd.Strategy ZstdError = zstd.ZstdError ZstdDict = zstd.ZstdDict train_dict = zstd.train_dict finalize_dict = zstd.finalize_dict get_frame_info = zstd.get_frame_info get_frame_size = zstd.get_frame_size zstd_version = zstd.zstd_version zstd_version_info = zstd.zstd_version_info class CParameter(IntEnum): """Compression parameters""" compressionLevel = zstd.CompressionParameter.compression_level # noqa: N815 windowLog = zstd.CompressionParameter.window_log # noqa: N815 hashLog = zstd.CompressionParameter.hash_log # noqa: N815 chainLog = zstd.CompressionParameter.chain_log # noqa: N815 searchLog = zstd.CompressionParameter.search_log # noqa: N815 minMatch = zstd.CompressionParameter.min_match # noqa: N815 targetLength = zstd.CompressionParameter.target_length # noqa: N815 strategy = zstd.CompressionParameter.strategy targetCBlockSize = 130 # not part of PEP-784 # noqa: N815 enableLongDistanceMatching = zstd.CompressionParameter.enable_long_distance_matching # noqa: N815 ldmHashLog = zstd.CompressionParameter.ldm_hash_log # noqa: N815 ldmMinMatch = 
zstd.CompressionParameter.ldm_min_match # noqa: N815 ldmBucketSizeLog = zstd.CompressionParameter.ldm_bucket_size_log # noqa: N815 ldmHashRateLog = zstd.CompressionParameter.ldm_hash_rate_log # noqa: N815 contentSizeFlag = zstd.CompressionParameter.content_size_flag # noqa: N815 checksumFlag = zstd.CompressionParameter.checksum_flag # noqa: N815 dictIDFlag = zstd.CompressionParameter.dict_id_flag # noqa: N815 nbWorkers = zstd.CompressionParameter.nb_workers # noqa: N815 jobSize = zstd.CompressionParameter.job_size # noqa: N815 overlapLog = zstd.CompressionParameter.overlap_log # noqa: N815 def bounds(self) -> tuple[int, int]: """Return lower and upper bounds of a compression parameter, both inclusive.""" return zstd.CompressionParameter(self).bounds() class DParameter(IntEnum): """Decompression parameters""" windowLogMax = zstd.DecompressionParameter.window_log_max # noqa: N815 def bounds(self) -> tuple[int, int]: """Return lower and upper bounds of a decompression parameter, both inclusive.""" return zstd.DecompressionParameter(self).bounds() _LevelOrOption: TypeAlias = int | Mapping[int, int] | None _Option: TypeAlias = Mapping[int, int] | None _ZstdDict: TypeAlias = ZstdDict | tuple[ZstdDict, int] | None _StrOrBytesPath: TypeAlias = str | bytes | PathLike[str] | PathLike[bytes] def _convert_level_or_option( level_or_option: _LevelOrOption | _Option, mode: str ) -> Mapping[int, int] | None: """Transform pyzstd params into PEP-784 `options` param""" if not isinstance(mode, str): raise TypeError(f"Invalid mode type: {mode}") read_mode = mode.startswith("r") if isinstance(level_or_option, int): if read_mode: raise TypeError( "In read mode (decompression), level_or_option argument " "should be a dict object, that represents decompression " "option. It doesn't support int type compression level " "in this case." ) return { CParameter.compressionLevel: level_or_option, } if level_or_option is not None: invalid_class = CParameter if read_mode else DParameter for key in level_or_option: if isinstance(key, invalid_class): raise TypeError( "Key of compression option dict should " f"NOT be {invalid_class.__name__}." ) return level_or_option class ZstdCompressor: """A streaming compressor. Thread-safe at method level.""" CONTINUE: ClassVar[Literal[0]] = zstd.ZstdCompressor.CONTINUE """Used for mode parameter in .compress() method. Collect more data, encoder decides when to output compressed result, for optimal compression ratio. Usually used for traditional streaming compression. """ FLUSH_BLOCK: ClassVar[Literal[1]] = zstd.ZstdCompressor.FLUSH_BLOCK """Used for mode parameter in .compress(), .flush() methods. Flush any remaining data, but don't close the current frame. Usually used for communication scenarios. If there is data, it creates at least one new block, that can be decoded immediately on reception. If no remaining data, no block is created, return b''. Note: Abuse of this mode will reduce compression ratio. Use it only when necessary. """ FLUSH_FRAME: ClassVar[Literal[2]] = zstd.ZstdCompressor.FLUSH_FRAME """Used for mode parameter in .compress(), .flush() methods. Flush any remaining data, and close the current frame. Usually used for traditional flush. Since zstd data consists of one or more independent frames, data can still be provided after a frame is closed. Note: Abuse of this mode will reduce compression ratio, and some programs can only decompress single frame data. Use it only when necessary. 
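    A minimal illustrative sketch, compressing two independent frames with
    one compressor object:

        c = ZstdCompressor()
        frame1 = c.compress(b'data one', ZstdCompressor.FLUSH_FRAME)
        frame2 = c.compress(b'data two', ZstdCompressor.FLUSH_FRAME)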
""" def __init__( self, level_or_option: _LevelOrOption = None, zstd_dict: _ZstdDict = None ) -> None: """Initialize a ZstdCompressor object. Parameters level_or_option: When it's an int object, it represents the compression level. When it's a dict object, it contains advanced compression parameters. zstd_dict: A ZstdDict object, pre-trained zstd dictionary. """ zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 self._compressor = zstd.ZstdCompressor( options=_convert_level_or_option(level_or_option, "w"), zstd_dict=zstd_dict ) def compress( self, data: Buffer, mode: Literal[0, 1, 2] = zstd.ZstdCompressor.CONTINUE ) -> bytes: """Provide data to the compressor object. Return a chunk of compressed data if possible, or b'' otherwise. Parameters data: A bytes-like object, data to be compressed. mode: Can be these 3 values .CONTINUE, .FLUSH_BLOCK, .FLUSH_FRAME. """ return self._compressor.compress(data, mode) def flush(self, mode: Literal[1, 2] = zstd.ZstdCompressor.FLUSH_FRAME) -> bytes: """Flush any remaining data in internal buffer. Since zstd data consists of one or more independent frames, the compressor object can still be used after this method is called. Parameter mode: Can be these 2 values .FLUSH_FRAME, .FLUSH_BLOCK. """ return self._compressor.flush(mode) def _set_pledged_input_size(self, size: int | None) -> None: """*This is an undocumented method, because it may be used incorrectly.* Set uncompressed content size of a frame, the size will be written into the frame header. 1, If called when (.last_mode != .FLUSH_FRAME), a RuntimeError will be raised. 2, If the actual size doesn't match the value, a ZstdError will be raised, and the last compressed chunk is likely to be lost. 3, The size is only valid for one frame, then it restores to "unknown size". Parameter size: Uncompressed content size of a frame, None means "unknown size". """ return self._compressor.set_pledged_input_size(size) @property def last_mode(self) -> Literal[0, 1, 2]: """The last mode used to this compressor object, its value can be .CONTINUE, .FLUSH_BLOCK, .FLUSH_FRAME. Initialized to .FLUSH_FRAME. It can be used to get the current state of a compressor, such as, data flushed, a frame ended. """ return self._compressor.last_mode def __reduce__(self) -> NoReturn: raise TypeError(f"Cannot pickle {type(self)} object.") class ZstdDecompressor: """A streaming decompressor, it stops after a frame is decompressed. Thread-safe at method level.""" def __init__(self, zstd_dict: _ZstdDict = None, option: _Option = None) -> None: """Initialize a ZstdDecompressor object. Parameters zstd_dict: A ZstdDict object, pre-trained zstd dictionary. option: A dict object that contains advanced decompression parameters. """ zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 self._decompressor = zstd.ZstdDecompressor( zstd_dict=zstd_dict, options=_convert_level_or_option(option, "r") ) def decompress(self, data: Buffer, max_length: int = -1) -> bytes: """Decompress data, return a chunk of decompressed data if possible, or b'' otherwise. It stops after a frame is decompressed. Parameters data: A bytes-like object, zstd data to be decompressed. max_length: Maximum size of returned data. When it is negative, the size of output buffer is unlimited. When it is nonnegative, returns at most max_length bytes of decompressed data. 
""" return self._decompressor.decompress(data, max_length) @property def eof(self) -> bool: """True means the end of the first frame has been reached. If decompress data after that, an EOFError exception will be raised.""" return self._decompressor.eof @property def needs_input(self) -> bool: """If the max_length output limit in .decompress() method has been reached, and the decompressor has (or may has) unconsumed input data, it will be set to False. In this case, pass b'' to .decompress() method may output further data. """ return self._decompressor.needs_input @property def unused_data(self) -> bytes: """A bytes object. When ZstdDecompressor object stops after a frame is decompressed, unused input data after the frame. Otherwise this will be b''.""" return self._decompressor.unused_data def __reduce__(self) -> NoReturn: raise TypeError(f"Cannot pickle {type(self)} object.") class EndlessZstdDecompressor: """A streaming decompressor, accepts multiple concatenated frames. Thread-safe at method level.""" def __init__(self, zstd_dict: _ZstdDict = None, option: _Option = None) -> None: """Initialize an EndlessZstdDecompressor object. Parameters zstd_dict: A ZstdDict object, pre-trained zstd dictionary. option: A dict object that contains advanced decompression parameters. """ self._zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 self._options = _convert_level_or_option(option, "r") self._reset() def _reset(self, data: bytes = b"") -> None: self._decompressor = zstd.ZstdDecompressor( zstd_dict=self._zstd_dict, options=self._options ) self._buffer = data self._at_frame_edge = not data def decompress(self, data: Buffer, max_length: int = -1) -> bytes: """Decompress data, return a chunk of decompressed data if possible, or b'' otherwise. Parameters data: A bytes-like object, zstd data to be decompressed. max_length: Maximum size of returned data. When it is negative, the size of output buffer is unlimited. When it is nonnegative, returns at most max_length bytes of decompressed data. """ if not isinstance(data, bytes) or not isinstance(max_length, int): raise TypeError self._buffer += data self._at_frame_edge &= not self._buffer out = b"" while True: try: out += self._decompressor.decompress(self._buffer, max_length) except ZstdError: self._reset() raise if self._decompressor.eof: self._reset(self._decompressor.unused_data) max_length -= len(out) else: self._buffer = b"" break return out @property def at_frame_edge(self) -> bool: """True when both the input and output streams are at a frame edge, means a frame is completely decoded and fully flushed, or the decompressor just be initialized. This flag could be used to check data integrity in some cases. """ return self._at_frame_edge @property def needs_input(self) -> bool: """If the max_length output limit in .decompress() method has been reached, and the decompressor has (or may has) unconsumed input data, it will be set to False. In this case, pass b'' to .decompress() method may output further data. """ return not self._buffer and ( self._at_frame_edge or self._decompressor.needs_input ) def __reduce__(self) -> NoReturn: raise TypeError(f"Cannot pickle {type(self)} object.") def compress( data: Buffer, level_or_option: _LevelOrOption = None, zstd_dict: _ZstdDict = None ) -> bytes: """Compress a block of data, return a bytes object. Compressing b'' will get an empty content frame (9 bytes or more). Parameters data: A bytes-like object, data to be compressed. 
level_or_option: When it's an int object, it represents compression level. When it's a dict object, it contains advanced compression parameters. zstd_dict: A ZstdDict object, pre-trained dictionary for compression. """ zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 return zstd.compress( data, options=_convert_level_or_option(level_or_option, "w"), zstd_dict=zstd_dict, ) def decompress( data: Buffer, zstd_dict: _ZstdDict = None, option: _Option = None ) -> bytes: """Decompress a zstd data, return a bytes object. Support multiple concatenated frames. Parameters data: A bytes-like object, compressed zstd data. zstd_dict: A ZstdDict object, pre-trained zstd dictionary. option: A dict object, contains advanced decompression parameters. """ zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 return zstd.decompress( data, options=_convert_level_or_option(option, "r"), zstd_dict=zstd_dict ) @deprecated( "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.RichMemZstdCompressor" ) class RichMemZstdCompressor: def __init__( self, level_or_option: _LevelOrOption = None, zstd_dict: _ZstdDict = None ) -> None: self._options = _convert_level_or_option(level_or_option, "w") self._zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 def compress(self, data: Buffer) -> bytes: return zstd.compress(data, options=self._options, zstd_dict=self._zstd_dict) def __reduce__(self) -> NoReturn: raise TypeError(f"Cannot pickle {type(self)} object.") richmem_compress = deprecated( "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.richmem_compress" )(compress) class ZstdFile(zstd.ZstdFile): """A file object providing transparent zstd (de)compression. A ZstdFile can act as a wrapper for an existing file object, or refer directly to a named file on disk. Note that ZstdFile provides a *binary* file interface - data read is returned as bytes, and data to be written should be an object that supports the Buffer Protocol. """ def __init__( self, filename: _StrOrBytesPath | BinaryIO, mode: Literal["r", "rb", "w", "wb", "x", "xb", "a", "ab"] = "r", *, level_or_option: _LevelOrOption | _Option = None, zstd_dict: _ZstdDict = None, read_size: int | _DeprecatedPlaceholder = _DEPRECATED_PLACEHOLDER, write_size: int | _DeprecatedPlaceholder = _DEPRECATED_PLACEHOLDER, ) -> None: """Open a zstd compressed file in binary mode. filename can be either an actual file name (given as a str, bytes, or PathLike object), in which case the named file is opened, or it can be an existing file object to read from or write to. mode can be "r" for reading (default), "w" for (over)writing, "x" for creating exclusively, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively. Parameters level_or_option: When it's an int object, it represents compression level. When it's a dict object, it contains advanced compression parameters. Note, in read mode (decompression), it can only be a dict object, that represents decompression option. It doesn't support int type compression level in this case. zstd_dict: A ZstdDict object, pre-trained dictionary for compression / decompression. 
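        A minimal illustrative sketch:

            with ZstdFile('archive.zst', 'w', level_or_option=5) as f:
                f.write(b'data')
            with ZstdFile('archive.zst', 'r') as f:
                data = f.read()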
""" if read_size != _DEPRECATED_PLACEHOLDER: warnings.warn( "pyzstd.ZstdFile()'s read_size parameter is deprecated", DeprecationWarning, stacklevel=2, ) if write_size != _DEPRECATED_PLACEHOLDER: warnings.warn( "pyzstd.ZstdFile()'s write_size parameter is deprecated", DeprecationWarning, stacklevel=2, ) zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 super().__init__( filename, mode, options=_convert_level_or_option(level_or_option, mode), zstd_dict=zstd_dict, ) @overload def open( # noqa: A001 filename: _StrOrBytesPath | BinaryIO, mode: Literal["r", "rb", "w", "wb", "a", "ab", "x", "xb"] = "rb", *, level_or_option: _LevelOrOption | _Option = None, zstd_dict: _ZstdDict = None, encoding: None = None, errors: None = None, newline: None = None, ) -> zstd.ZstdFile: ... @overload def open( # noqa: A001 filename: _StrOrBytesPath | BinaryIO, mode: Literal["rt", "wt", "at", "xt"], *, level_or_option: _LevelOrOption | _Option = None, zstd_dict: _ZstdDict = None, encoding: str | None = None, errors: str | None = None, newline: str | None = None, ) -> TextIOWrapper: ... def open( # noqa: A001 filename: _StrOrBytesPath | BinaryIO, mode: Literal[ "r", "rb", "w", "wb", "a", "ab", "x", "xb", "rt", "wt", "at", "xt" ] = "rb", *, level_or_option: _LevelOrOption | _Option = None, zstd_dict: _ZstdDict = None, encoding: str | None = None, errors: str | None = None, newline: str | None = None, ) -> zstd.ZstdFile | TextIOWrapper: """Open a zstd compressed file in binary or text mode. filename can be either an actual file name (given as a str, bytes, or PathLike object), in which case the named file is opened, or it can be an existing file object to read from or write to. The mode parameter can be "r", "rb" (default), "w", "wb", "x", "xb", "a", "ab" for binary mode, or "rt", "wt", "xt", "at" for text mode. The level_or_option and zstd_dict parameters specify the settings, as for ZstdCompressor, ZstdDecompressor and ZstdFile. When using read mode (decompression), the level_or_option parameter can only be a dict object, that represents decompression option. It doesn't support int type compression level in this case. For binary mode, this function is equivalent to the ZstdFile constructor: ZstdFile(filename, mode, ...). In this case, the encoding, errors and newline parameters must not be provided. For text mode, an ZstdFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s). """ zstd_dict = cast( "ZstdDict | None", zstd_dict ) # https://github.com/python/typeshed/pull/15113 return zstd.open( filename, mode, options=_convert_level_or_option(level_or_option, mode), zstd_dict=zstd_dict, encoding=encoding, errors=errors, newline=newline, ) def _create_callback( output_stream: BinaryIO | None, callback: Callable[[int, int, memoryview, memoryview], None] | None, ) -> Callable[[int, int, bytes, bytes], None]: if output_stream is None: if callback is None: raise TypeError( "At least one of output_stream argument and callback argument should be non-None." 
) def cb( total_input: int, total_output: int, data_in: bytes, data_out: bytes ) -> None: callback( total_input, total_output, memoryview(data_in), memoryview(data_out) ) elif callback is None: def cb( total_input: int, # noqa: ARG001 total_output: int, # noqa: ARG001 data_in: bytes, # noqa: ARG001 data_out: bytes, ) -> None: output_stream.write(data_out) else: def cb( total_input: int, total_output: int, data_in: bytes, data_out: bytes ) -> None: output_stream.write(data_out) callback( total_input, total_output, memoryview(data_in), memoryview(data_out) ) return cb @deprecated( "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.compress_stream" ) def compress_stream( input_stream: BinaryIO, output_stream: BinaryIO | None, *, level_or_option: _LevelOrOption = None, zstd_dict: _ZstdDict = None, pledged_input_size: int | None = None, read_size: int = 131_072, write_size: int | _DeprecatedPlaceholder = _DEPRECATED_PLACEHOLDER, # noqa: ARG001 callback: Callable[[int, int, memoryview, memoryview], None] | None = None, ) -> tuple[int, int]: """Compresses input_stream and writes the compressed data to output_stream, it doesn't close the streams. ---- DEPRECATION NOTICE The (de)compress_stream are deprecated and will be removed in a future version. See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives ---- If input stream is b'', nothing will be written to output stream. Return a tuple, (total_input, total_output), the items are int objects. Parameters input_stream: Input stream that has a .readinto(b) method. output_stream: Output stream that has a .write(b) method. If use callback function, this parameter can be None. level_or_option: When it's an int object, it represents the compression level. When it's a dict object, it contains advanced compression parameters. zstd_dict: A ZstdDict object, pre-trained zstd dictionary. pledged_input_size: If set this parameter to the size of input data, the size will be written into the frame header. If the actual input data doesn't match it, a ZstdError will be raised. read_size: Input buffer size, in bytes. callback: A callback function that accepts four parameters: (total_input, total_output, read_data, write_data), the first two are int objects, the last two are readonly memoryview objects. 
""" if not hasattr(input_stream, "read"): raise TypeError("input_stream argument should have a .read() method.") if output_stream is not None and not hasattr(output_stream, "write"): raise TypeError("output_stream argument should have a .write() method.") if read_size < 1: raise ValueError("read_size argument should be a positive number.") callback = _create_callback(output_stream, callback) total_input = 0 total_output = 0 compressor = ZstdCompressor(level_or_option, zstd_dict) if pledged_input_size is not None and pledged_input_size != 2**64 - 1: compressor._set_pledged_input_size(pledged_input_size) # noqa: SLF001 while data_in := input_stream.read(read_size): total_input += len(data_in) data_out = compressor.compress(data_in) total_output += len(data_out) callback(total_input, total_output, data_in, data_out) if not total_input: return total_input, total_output data_out = compressor.flush() total_output += len(data_out) callback(total_input, total_output, b"", data_out) return total_input, total_output @deprecated( "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.decompress_stream" ) def decompress_stream( input_stream: BinaryIO, output_stream: BinaryIO | None, *, zstd_dict: _ZstdDict = None, option: _Option = None, read_size: int = 131_075, write_size: int = 131_072, callback: Callable[[int, int, memoryview, memoryview], None] | None = None, ) -> tuple[int, int]: """Decompresses input_stream and writes the decompressed data to output_stream, it doesn't close the streams. ---- DEPRECATION NOTICE The (de)compress_stream are deprecated and will be removed in a future version. See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives ---- Supports multiple concatenated frames. Return a tuple, (total_input, total_output), the items are int objects. Parameters input_stream: Input stream that has a .readinto(b) method. output_stream: Output stream that has a .write(b) method. If use callback function, this parameter can be None. zstd_dict: A ZstdDict object, pre-trained zstd dictionary. option: A dict object, contains advanced decompression parameters. read_size: Input buffer size, in bytes. write_size: Output buffer size, in bytes. callback: A callback function that accepts four parameters: (total_input, total_output, read_data, write_data), the first two are int objects, the last two are readonly memoryview objects. """ if not hasattr(input_stream, "read"): raise TypeError("input_stream argument should have a .read() method.") if output_stream is not None and not hasattr(output_stream, "write"): raise TypeError("output_stream argument should have a .write() method.") if read_size < 1 or write_size < 1: raise ValueError( "read_size argument and write_size argument should be positive numbers." ) callback = _create_callback(output_stream, callback) total_input = 0 total_output = 0 decompressor = EndlessZstdDecompressor(zstd_dict, option) while True: if decompressor.needs_input: data_in = input_stream.read(read_size) if not data_in: break else: data_in = b"" total_input += len(data_in) data_out = decompressor.decompress(data_in, write_size) total_output += len(data_out) callback(total_input, total_output, data_in, data_out) if not decompressor.at_frame_edge: raise ZstdError( "Decompression failed: zstd data ends in an incomplete frame," " maybe the input data was truncated." f" Total input {total_input} bytes, total output {total_output} bytes." 
) return total_input, total_output class CompressionValues(NamedTuple): default: int min: int max: int compressionLevel_values = CompressionValues( # noqa: N816 zstd.COMPRESSION_LEVEL_DEFAULT, *CParameter.compressionLevel.bounds() ) zstd_support_multithread = CParameter.nbWorkers.bounds() != (0, 0) # import here to avoid circular dependency issues from ._seekable_zstdfile import SeekableFormatError, SeekableZstdFile # noqa: E402 pyzstd-0.19.1/src/pyzstd/__main__.py0000644000000000000000000004210613615410400014300 0ustar00# CLI of pyzstd module: python -m pyzstd --help import argparse from collections.abc import Mapping, Sequence import os from shutil import copyfileobj from time import time from typing import Any, BinaryIO, Protocol, cast from pyzstd import ( CParameter, DParameter, ZstdDict, ZstdFile, compressionLevel_values, train_dict, zstd_version, ) from pyzstd import __version__ as pyzstd_version class Args(Protocol): dict: str f: bool compress: str tar_input_dir: str level: int threads: int long: int checksum: bool write_dictID: bool # noqa: N815 decompress: str tar_output_dir: str test: str | None windowLogMax: int # noqa: N815 train: str maxdict: int dictID: int # noqa: N815 output: BinaryIO | None input: BinaryIO | None zd: ZstdDict | None # buffer sizes recommended by zstd C_READ_BUFFER = 131072 D_READ_BUFFER = 131075 # open output file and assign to args.output def open_output(args: Args, path: str) -> None: if not args.f and os.path.isfile(path): answer = input(f"output file already exists:\n{path}\noverwrite? (y/n) ") print() if answer != "y": import sys sys.exit() args.output = open(path, "wb") # noqa: SIM115 def close_files(args: Args) -> None: if args.input is not None: args.input.close() if args.output is not None: args.output.close() def compress_option(args: Args) -> Mapping[int, int]: # threads message if args.threads == 0: threads_msg = "single-thread mode" else: threads_msg = f"multi-thread mode, {args.threads} threads." # long mode if args.long >= 0: use_long = 1 window_log = args.long long_msg = f"yes, windowLog is {window_log}." 
else: use_long = 0 window_log = 0 long_msg = "no" # option option: Mapping[int, int] = { CParameter.compressionLevel: args.level, CParameter.nbWorkers: args.threads, CParameter.enableLongDistanceMatching: use_long, CParameter.windowLog: window_log, CParameter.checksumFlag: args.checksum, CParameter.dictIDFlag: args.write_dictID, } # pre-compress message msg = ( f" - compression level: {args.level}\n" f" - threads: {threads_msg}\n" f" - long mode: {long_msg}\n" f" - zstd dictionary: {args.zd}\n" f" - add checksum: {args.checksum}" ) print(msg) return option def compress(args: Args) -> None: args.input = cast("BinaryIO", args.input) # output file if args.output is None: open_output(args, args.input.name + ".zst") args.output = cast("BinaryIO", args.output) # pre-compress message msg = ( "Compress file:\n" f" - input file : {args.input.name}\n" f" - output file: {args.output.name}" ) print(msg) # option option = compress_option(args) # compress t1 = time() with ZstdFile(args.output, "w", level_or_option=option, zstd_dict=args.zd) as fout: copyfileobj(args.input, fout) t2 = time() in_size = args.input.tell() out_size = args.output.tell() close_files(args) # post-compress message ratio = 100.0 if in_size == 0 else 100 * out_size / in_size msg = ( f"\nCompression succeeded, {t2 - t1:.2f} seconds.\n" f"Input {in_size:,} bytes, output {out_size:,} bytes, ratio {ratio:.2f}%.\n" ) print(msg) def decompress(args: Args) -> None: args.input = cast("BinaryIO", args.input) # output file if args.output is None: if args.test is None: from re import subn out_path, replaced = subn(r"(?i)^(.*)\.zst$", r"\1", args.input.name) if not replaced: out_path = args.input.name + ".decompressed" else: out_path = os.devnull open_output(args, out_path) args.output = cast("BinaryIO", args.output) # option option: Mapping[int, int] = {DParameter.windowLogMax: args.windowLogMax} # pre-decompress message output_name = args.output.name if output_name == os.devnull: output_name = "None" print( "Decompress file:\n" f" - input file : {args.input.name}\n" f" - output file: {output_name}\n" f" - zstd dictionary: {args.zd}" ) # decompress t1 = time() with ZstdFile(args.input, level_or_option=option, zstd_dict=args.zd) as fin: copyfileobj(fin, args.output) t2 = time() in_size = args.input.tell() out_size = args.output.tell() close_files(args) # post-decompress message ratio = 100.0 if out_size == 0 else 100 * in_size / out_size msg = ( f"\nDecompression succeeded, {t2 - t1:.2f} seconds.\n" f"Input {in_size:,} bytes, output {out_size:,} bytes, ratio {ratio:.2f}%.\n" ) print(msg) def train(args: Args) -> None: from glob import glob # check output file if args.output is None: raise ValueError("need to specify output file using -o/--output option") # gather samples print("Gathering samples, please wait.", flush=True) lst = [] for file in glob(args.train, recursive=True): with open(file, "rb") as f: dat = f.read() lst.append(dat) print("samples count:", len(lst), end="\r", flush=True) if len(lst) == 0: raise ValueError("No samples gathered, please check GLOB_PATH.") samples_size = sum(len(sample) for sample in lst) if samples_size == 0: raise ValueError("Samples content is empty, can't train.") # pre-train message msg = ( "Gathered, train zstd dictionary:\n" " - samples: {}\n" " - samples number: {}\n" " - samples content: {:,} bytes\n" " - dict file: {}\n" " - dict max size: {:,} bytes\n" " - dict id: {}\n" "Training, please wait." 
).format( args.train, len(lst), samples_size, args.output.name, args.maxdict, "random" if args.dictID is None else args.dictID, ) print(msg, flush=True) # train t1 = time() zd = train_dict(lst, args.maxdict) t2 = time() # Dictionary_ID: 4 bytes, stored in little-endian format. # it can be any value, except 0 (which means no Dictionary_ID). if args.dictID is not None and len(zd.dict_content) >= 8: content = ( zd.dict_content[:4] + args.dictID.to_bytes(4, "little") + zd.dict_content[8:] ) zd = ZstdDict(content) # save to file args.output.write(zd.dict_content) close_files(args) # post-train message msg = f"Training succeeded, {t2 - t1:.2f} seconds.\nDictionary: {zd}\n" print(msg) def tarfile_create(args: Args) -> None: import sys if sys.version_info < (3, 14): from backports.zstd import tarfile else: import tarfile # check input dir args.tar_input_dir = args.tar_input_dir.rstrip(os.sep) if not os.path.isdir(args.tar_input_dir): msg = "Tar archive input dir invalid: " + args.tar_input_dir raise NotADirectoryError(msg) dirname, basename = os.path.split(args.tar_input_dir) # check output file if args.output is None: out_path = os.path.join(dirname, basename + ".tar.zst") open_output(args, out_path) args.output = cast("BinaryIO", args.output) # pre-compress message msg = ( "Archive tar file:\n" f" - input directory: {args.tar_input_dir}\n" f" - output file: {args.output.name}" ) print(msg) # option option = compress_option(args) # compress print("Archiving, please wait.", flush=True) t1 = time() with tarfile.TarFile.zstopen( None, fileobj=args.output, mode="w", options=option, zstd_dict=args.zd ) as f: f.add(args.tar_input_dir, basename) uncompressed_size = f.fileobj.tell() # type: ignore[union-attr] t2 = time() output_file_size = args.output.tell() close_files(args) # post-compress message if uncompressed_size != 0: ratio = 100 * output_file_size / uncompressed_size else: ratio = 100.0 msg = ( f"Archiving succeeded, {t2 - t1:.2f} seconds.\n" f"Input ~{uncompressed_size:,} bytes, output {output_file_size:,} bytes, ratio {ratio:.2f}%.\n" ) print(msg) def tarfile_extract(args: Args) -> None: import sys if sys.version_info < (3, 14): from backports.zstd import tarfile else: import tarfile # input file size if args.input is None: msg = "need to specify input file using -d/--decompress option." raise FileNotFoundError(msg) input_file_size = os.path.getsize(args.input.name) # check output dir if not os.path.isdir(args.tar_output_dir): msg = "Tar archive output dir invalid: " + args.tar_output_dir raise NotADirectoryError(msg) # option option: Mapping[int, int] = {DParameter.windowLogMax: args.windowLogMax} # pre-extract message msg = ( "Extract tar archive:\n" f" - input file: {args.input.name}\n" f" - output dir: {args.tar_output_dir}\n" f" - zstd dictionary: {args.zd}\n" "Extracting, please wait." 
) print(msg, flush=True) # extract t1 = time() with tarfile.TarFile.zstopen( None, fileobj=args.input, mode="r", zstd_dict=args.zd, options=option ) as f: f.extractall(args.tar_output_dir, filter="data") uncompressed_size = f.fileobj.tell() # type: ignore[union-attr] t2 = time() close_files(args) # post-extract message if uncompressed_size != 0: ratio = 100 * input_file_size / uncompressed_size else: ratio = 100.0 msg = ( f"Extraction succeeded, {t2 - t1:.2f} seconds.\n" f"Input {input_file_size:,} bytes, output ~{uncompressed_size:,} bytes, ratio {ratio:.2f}%.\n" ) print(msg) def range_action(start: int, end: int) -> type[argparse.Action]: class RangeAction(argparse.Action): def __call__( self, _: object, namespace: object, values: str | Sequence[Any] | None, option_string: str | None = None, ) -> None: # convert to int try: v = int(values) # type: ignore[arg-type] except ValueError: raise TypeError(f"{option_string} should be an integer") from None # check range if not (start <= v <= end): # message msg = ( f"{option_string} value should: {start} <= v <= {end}. " f"provided value is {v}." ) raise ValueError(msg) setattr(namespace, self.dest, v) return RangeAction def parse_arg() -> Args: p = argparse.ArgumentParser( prog="CLI of pyzstd module", description=( "The command style is similar to zstd's " "CLI, but there are some differences.\n" "Zstd's CLI should be faster, it has " "some I/O optimizations." ), epilog=( "Examples of use:\n" " compress a file:\n" " python -m pyzstd -c IN_FILE -o OUT_FILE\n" " decompress a file:\n" " python -m pyzstd -d IN_FILE -o OUT_FILE\n" " create a tar archive:\n" " python -m pyzstd --tar-input-dir DIR -o OUT_FILE\n" " extract a tar archive, output will forcibly overwrite existing files:\n" " python -m pyzstd -d IN_FILE --tar-output-dir DIR\n" " train a zstd dictionary, ** traverses sub-directories:\n" ' python -m pyzstd --train "E:\\cpython\\**\\*.c" -o OUT_FILE' ), formatter_class=argparse.RawDescriptionHelpFormatter, ) g = p.add_argument_group("Common arguments") g.add_argument( "-D", "--dict", metavar="FILE", type=argparse.FileType("rb"), help="use FILE as zstd dictionary for compression or decompression", ) g.add_argument( "-o", "--output", metavar="FILE", type=str, help="result stored into FILE" ) g.add_argument( "-f", action="store_true", help="disable output check, allows overwriting existing file.", ) g = p.add_argument_group("Compression arguments") gm = g.add_mutually_exclusive_group() gm.add_argument("-c", "--compress", metavar="FILE", type=str, help="compress FILE") gm.add_argument( "--tar-input-dir", metavar="DIR", type=str, help=( "create a tar archive from DIR. this option overrides -c/--compress option." ), ) g.add_argument( "-l", "--level", metavar="#", default=compressionLevel_values.default, action=range_action(compressionLevel_values.min, compressionLevel_values.max), help=f"compression level, range: [{compressionLevel_values.min},{compressionLevel_values.max}], default: {compressionLevel_values.default}.", ) g.add_argument( "-t", "--threads", metavar="#", default=0, action=range_action(*CParameter.nbWorkers.bounds()), help=( "spawns # threads to compress. if this option is not " "specified or is 0, use single thread mode." 
), ) g.add_argument( "--long", metavar="#", nargs="?", const=27, default=-1, action=range_action(*CParameter.windowLog.bounds()), help="enable long distance matching with given windowLog (default #: 27)", ) g.add_argument( "--no-checksum", action="store_false", dest="checksum", default=True, help="don't add 4-byte XXH64 checksum to the frame", ) g.add_argument( "--no-dictID", action="store_false", dest="write_dictID", default=True, help="don't write dictID into frame header (dictionary compression only)", ) g = p.add_argument_group("Decompression arguments") gm = g.add_mutually_exclusive_group() gm.add_argument( "-d", "--decompress", metavar="FILE", type=str, help="decompress FILE" ) g.add_argument( "--tar-output-dir", metavar="DIR", type=str, help=( "extract tar archive to DIR, " "output will forcibly overwrite existing files. " "this option overrides -o/--output option." ), ) gm.add_argument( "--test", metavar="FILE", type=str, help="try to decompress FILE to check integrity", ) g.add_argument( "--windowLogMax", metavar="#", default=0, action=range_action(*DParameter.windowLogMax.bounds()), help="set a memory usage limit for decompression (windowLogMax)", ) g = p.add_argument_group("Dictionary builder") g.add_argument( "--train", metavar="GLOB_PATH", type=str, help="create a dictionary from a training set of files", ) g.add_argument( "--maxdict", metavar="SIZE", type=int, default=112640, help="limit dictionary to SIZE bytes (default: 112640)", ) g.add_argument( "--dictID", metavar="DICT_ID", default=None, action=range_action(1, 0xFFFFFFFF), help="specify dictionary ID value (default: random)", ) args = p.parse_args() # input file if args.compress is not None: args.input = open(args.compress, "rb", buffering=C_READ_BUFFER) # noqa: SIM115 elif args.decompress is not None: args.input = open(args.decompress, "rb", buffering=D_READ_BUFFER) # noqa: SIM115 elif args.test is not None: args.input = open(args.test, "rb", buffering=D_READ_BUFFER) # noqa: SIM115 else: args.input = None # output file if args.output is not None: open_output(args, args.output) # load dictionary if args.dict is not None: zd_content = args.dict.read() args.dict.close() # Magic_Number: 4 bytes, value 0xEC30A437, little-endian format. is_raw = zd_content[:4] != b"\x37\xa4\x30\xec" args.zd = ZstdDict(zd_content, is_raw=is_raw) else: args.zd = None # arguments combination functions = [ args.compress, args.decompress, args.test, args.train, args.tar_input_dir, ] if sum(1 for i in functions if i is not None) > 1: raise ValueError("Wrong arguments combination") return args def main() -> None: print(f"*** pyzstd module v{pyzstd_version}, zstd library v{zstd_version}. ***\n") args = parse_arg() if args.tar_input_dir: tarfile_create(args) elif args.tar_output_dir: tarfile_extract(args) elif args.compress: compress(args) elif args.decompress or args.test: decompress(args) elif args.train: train(args) else: print("Invalid command. 
See help: python -m pyzstd --help") if __name__ == "__main__": main() pyzstd-0.19.1/src/pyzstd/_seekable_zstdfile.py0000644000000000000000000011020413615410400016371 0ustar00from array import array from bisect import bisect_right import io from os import PathLike from os.path import isfile from struct import Struct import sys from typing import BinaryIO, ClassVar, Literal, cast import warnings from pyzstd import ( _DEPRECATED_PLACEHOLDER, ZstdCompressor, ZstdDecompressor, _DeprecatedPlaceholder, _LevelOrOption, _Option, _StrOrBytesPath, _ZstdDict, ) if sys.version_info < (3, 12): from typing_extensions import Buffer else: from collections.abc import Buffer if sys.version_info < (3, 11): from typing_extensions import Self else: from typing import Self __all__ = ("SeekableFormatError", "SeekableZstdFile") _MODE_CLOSED = 0 _MODE_READ = 1 _MODE_WRITE = 2 class SeekableFormatError(Exception): "An error related to Zstandard Seekable Format." def __init__(self, msg: str) -> None: super().__init__("Zstandard Seekable Format error: " + msg) __doc__ = """\ Zstandard Seekable Format (Ver 0.1.0, Apr 2017) Square brackets are used to indicate optional fields. All numeric fields are little-endian unless specified otherwise. A. Seek table is a skippable frame at the end of file: Magic_Number Frame_Size [Seek_Table_Entries] Seek_Table_Footer 4 bytes 4 bytes 8-12 bytes each 9 bytes Magic_Number must be 0x184D2A5E. B. Seek_Table_Entries: Compressed_Size Decompressed_Size [Checksum] 4 bytes 4 bytes 4 bytes Checksum is optional. C. Seek_Table_Footer: Number_Of_Frames Seek_Table_Descriptor Seekable_Magic_Number 4 bytes 1 byte 4 bytes Seekable_Magic_Number must be 0x8F92EAB1. D. Seek_Table_Descriptor: Bit_number Field_name 7 Checksum_Flag 6-2 Reserved_Bits (should ensure they are set to 0) 1-0 Unused_Bits (should not interpret these bits)""" __format_version__ = "0.1.0" class _SeekTable: _s_2uint32 = Struct("<II") _s_3uint32 = Struct("<III") _s_footer = Struct("<IBI") def __init__(self, read_mode: bool) -> None: self._read_mode = read_mode self._clear_seek_table() def _clear_seek_table(self) -> None: self._has_checksum = False # The seek table frame size, used for append mode. self._seek_frame_size = 0 # The file size, used for seeking to EOF. self._file_size = 0 self._frames_count = 0 self._full_c_size = 0 self._full_d_size = 0 if self._read_mode: # Item: cumulated_size # Length: frames_count + 1 # q is int64_t. On Linux/macOS/Windows, Py_off_t is signed, so # ZstdFile/SeekableZstdFile use int64_t as file position/size. self._cumulated_c_size = array("q", [0]) self._cumulated_d_size = array("q", [0]) else: # Item: (c_size1, d_size1, # c_size2, d_size2, # c_size3, d_size3, # ...) # Length: frames_count * 2 # I is uint32_t. self._frames = array("I") def append_entry(self, compressed_size: int, decompressed_size: int) -> None: if compressed_size == 0: if decompressed_size == 0: # a (0, 0) frame makes no sense return # Impossible frame raise ValueError("impossible frame: compressed size is 0, but decompressed size is non-zero") self._frames_count += 1 self._full_c_size += compressed_size self._full_d_size += decompressed_size if self._read_mode: self._cumulated_c_size.append(self._full_c_size) self._cumulated_d_size.append(self._full_d_size) else: self._frames.append(compressed_size) self._frames.append(decompressed_size) # seek_to_0 is True or False. # In read mode, seeking to 0 is necessary. def load_seek_table(self, fp: BinaryIO, seek_to_0: bool) -> None: # noqa: FBT001 # Get file size fsize = fp.seek(0, 2) # 2 is SEEK_END if fsize == 0: return if fsize < 17: # 17=4+4+9 msg = ( "File size is less than the minimal size " "(17 bytes) of Zstandard Seekable Format."
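To make the footer layout in the format description above concrete, a short sketch that packs and unpacks the 9-byte Seek_Table_Footer with the same little-endian layout this module's _s_footer uses:

from struct import Struct

footer = Struct("<IBI")  # Number_Of_Frames, Seek_Table_Descriptor, Seekable_Magic_Number
raw = footer.pack(2, 0b10000000, 0x8F92EAB1)   # 2 frames, Checksum_Flag set
frames, descriptor, magic = footer.unpack(raw)
assert (frames, bool(descriptor & 0x80), magic) == (2, True, 0x8F92EAB1)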
) raise SeekableFormatError(msg) # Read footer fp.seek(-9, 2) # 2 is SEEK_END footer = fp.read(9) frames_number, descriptor, magic_number = self._s_footer.unpack(footer) # Check format if magic_number != 0x8F92EAB1: msg = ( "The last 4 bytes of the file are not the Zstandard Seekable " 'Format Magic Number (b"\\xb1\\xea\\x92\\x8f"). ' "The SeekableZstdFile class only supports Zstandard Seekable " "Format files or 0-size files. To read a zstd file that is " "not in Zstandard Seekable Format, use the ZstdFile class." ) raise SeekableFormatError(msg) # Seek_Table_Descriptor self._has_checksum = descriptor & 0b10000000 if descriptor & 0b01111100: msg = ( f"In Zstandard Seekable Format version {__format_version__}, the " "Reserved_Bits in Seek_Table_Descriptor must be 0." ) raise SeekableFormatError(msg) # Frame size entry_size = 12 if self._has_checksum else 8 skippable_frame_size = 17 + frames_number * entry_size if fsize < skippable_frame_size: raise SeekableFormatError( "File size is less than expected size of the seek table frame." ) # Read seek table fp.seek(-skippable_frame_size, 2) # 2 is SEEK_END skippable_frame = fp.read(skippable_frame_size) skippable_magic_number, content_size = self._s_2uint32.unpack_from( skippable_frame, 0 ) # Check format if skippable_magic_number != 0x184D2A5E: msg = "Seek table frame's Magic_Number is wrong." raise SeekableFormatError(msg) if content_size != skippable_frame_size - 8: msg = "Seek table frame's Frame_Size is wrong." raise SeekableFormatError(msg) # No more fp operations if seek_to_0: fp.seek(0) # Parse seek table offset = 8 for idx in range(frames_number): if self._has_checksum: compressed_size, decompressed_size, _ = self._s_3uint32.unpack_from( skippable_frame, offset ) offset += 12 else: compressed_size, decompressed_size = self._s_2uint32.unpack_from( skippable_frame, offset ) offset += 8 # Check format if compressed_size == 0 and decompressed_size != 0: msg = ( f"Wrong seek table. The frame at index {idx} (0-based) " "has 0 compressed size but a non-zero decompressed size, " "this is impossible." ) raise SeekableFormatError(msg) # Append to seek table self.append_entry(compressed_size, decompressed_size) # Check format if self._full_c_size > fsize - skippable_frame_size: msg = ( f"Wrong seek table. At frame index {idx} (0-based), " "the cumulated compressed size is greater than the " "file size." ) raise SeekableFormatError(msg) # Check format if self._full_c_size != fsize - skippable_frame_size: raise SeekableFormatError("The cumulated compressed size is wrong") # Parsed successfully, save for future use. self._seek_frame_size = skippable_frame_size self._file_size = fsize # Find frame index by decompressed position def index_by_dpos(self, pos: int) -> int | None: # The array's first item is 0, so clamp negative positions. pos = max(pos, 0) i = bisect_right(self._cumulated_d_size, pos) if i != self._frames_count + 1: return i # None means >= EOF return None def get_frame_sizes(self, i: int) -> tuple[int, int]: return (self._cumulated_c_size[i - 1], self._cumulated_d_size[i - 1]) def get_full_c_size(self) -> int: return self._full_c_size def get_full_d_size(self) -> int: return self._full_d_size # Merge the seek table into max_frames entries. # The format allows up to 0xFFFF_FFFF frames. When the frame # count exceeds that limit, use this method to merge entries.
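The merge described in the comment above distributes source frames across the target entries with divmod, so earlier groups absorb the remainder. A standalone sketch (mirroring test_merge_frames1 in the test suite) merging five (9, 10) frames into two entries:

frames = [(9, 10)] * 5
max_frames = 2
a, b = divmod(len(frames), max_frames)   # (2, 1): the first group gets one extra frame
merged, pos = [], 0
for i in range(max_frames):
    length = a + (1 if i < b else 0)
    chunk = frames[pos:pos + length]
    merged.append((sum(c for c, _ in chunk), sum(d for _, d in chunk)))
    pos += length
assert merged == [(27, 30), (18, 20)]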
def _merge_frames(self, max_frames: int) -> None: if self._frames_count <= max_frames: return # Clear the table arr = self._frames a, b = divmod(self._frames_count, max_frames) self._clear_seek_table() # Merge frames pos = 0 for i in range(max_frames): # Slice length length = (a + (1 if i < b else 0)) * 2 # Merge c_size = 0 d_size = 0 for j in range(pos, pos + length, 2): c_size += arr[j] d_size += arr[j + 1] self.append_entry(c_size, d_size) pos += length def write_seek_table(self, fp: BinaryIO) -> None: # Exceeded format limit if self._frames_count > 0xFFFFFFFF: # Emit a warning warnings.warn( f"SeekableZstdFile's seek table has {self._frames_count} entries, " "which exceeds the maximal value allowed by " "Zstandard Seekable Format (0xFFFFFFFF). The " "entries will be merged into 0xFFFFFFFF entries, " "this may reduce seeking performance.", RuntimeWarning, 3, ) # Merge frames self._merge_frames(0xFFFFFFFF) # The skippable frame offset = 0 size = 17 + 8 * self._frames_count ba = bytearray(size) # Header self._s_2uint32.pack_into(ba, offset, 0x184D2A5E, size - 8) offset += 8 # Entries iter_frames = iter(self._frames) for frame_c, frame_d in zip(iter_frames, iter_frames, strict=True): self._s_2uint32.pack_into(ba, offset, frame_c, frame_d) offset += 8 # Footer self._s_footer.pack_into(ba, offset, self._frames_count, 0, 0x8F92EAB1) # Write fp.write(ba) @property def seek_frame_size(self) -> int: return self._seek_frame_size @property def file_size(self) -> int: return self._file_size def __len__(self) -> int: return self._frames_count def get_info(self) -> tuple[int, int, int]: return (self._frames_count, self._full_c_size, self._full_d_size) class _EOFSuccess(EOFError): # noqa: N818 pass class _SeekableDecompressReader(io.RawIOBase): def __init__( self, fp: BinaryIO, zstd_dict: _ZstdDict, option: _Option, read_size: int ) -> None: # Check fp readable/seekable if not hasattr(fp, "readable") or not hasattr(fp, "seekable"): raise TypeError( "In SeekableZstdFile's reading mode, the file object should " "have .readable()/.seekable() methods." ) if not fp.readable(): raise TypeError( "In SeekableZstdFile's reading mode, the file object should " "be readable." ) if not fp.seekable(): raise TypeError( "In SeekableZstdFile's reading mode, the file object should " "be seekable. If the file object is not seekable, it can be " "read sequentially using ZstdFile class." ) self._fp = fp self._zstd_dict = zstd_dict self._option = option self._read_size = read_size # Load seek table self._seek_table = _SeekTable(read_mode=True) self._seek_table.load_seek_table(fp, seek_to_0=True) self._size = self._seek_table.get_full_d_size() self._pos = 0 self._decompressor: ZstdDecompressor | None = ZstdDecompressor( self._zstd_dict, self._option ) def close(self) -> None: self._decompressor = None return super().close() def readable(self) -> bool: return True def seekable(self) -> bool: return True def tell(self) -> int: return self._pos def _decompress(self, size: int) -> bytes: """ Decompress up to size bytes. May return b"", in which case try again. Raises _EOFSuccess if EOF is reached at frame edge. Raises EOFError if EOF is reached elsewhere. 
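The _decompress() helper here relies on ZstdDecompressor's single-frame behavior: it stops at a frame edge, sets .eof, and exposes the remaining bytes via .unused_data. A minimal sketch of that behavior on two concatenated frames:

from pyzstd import ZstdDecompressor, compress

dat = compress(b"abc") + compress(b"def")   # two concatenated zstd frames
d = ZstdDecompressor()
out = d.decompress(dat)                     # stops at the first frame edge
assert (out, d.eof, d.unused_data) == (b"abc", True, compress(b"def"))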
""" if self._decompressor is None: # frame edge data = self._fp.read(self._read_size) if not data: # EOF raise _EOFSuccess elif self._decompressor.needs_input: data = self._fp.read(self._read_size) if not data: # EOF raise EOFError( "Compressed file ended before the end-of-stream marker was reached" ) else: data = self._decompressor.unused_data if self._decompressor.eof: # frame edge self._decompressor = None if not data: # may not be at EOF return b"" if self._decompressor is None: self._decompressor = ZstdDecompressor(self._zstd_dict, self._option) out = self._decompressor.decompress(data, size) self._pos += len(out) return out def readinto(self, b: Buffer) -> int: with memoryview(b) as view, view.cast("B") as byte_view: try: while True: if out := self._decompress(byte_view.nbytes): byte_view[: len(out)] = out return len(out) except _EOFSuccess: return 0 # If the new position is within BufferedReader's buffer, # this method may not be called. def seek(self, offset: int, whence: int = 0) -> int: # offset is absolute file position if whence == 0: # SEEK_SET pass elif whence == 1: # SEEK_CUR offset = self._pos + offset elif whence == 2: # SEEK_END offset = self._size + offset else: raise ValueError(f"Invalid value for whence: {whence}") # Get new frame index new_frame = self._seek_table.index_by_dpos(offset) # offset >= EOF if new_frame is None: self._pos = self._size self._decompressor = None self._fp.seek(self._seek_table.file_size) return self._pos # Prepare to jump old_frame = self._seek_table.index_by_dpos(self._pos) c_pos, d_pos = self._seek_table.get_frame_sizes(new_frame) # If at P1, seeking to P2 will unnecessarily read the skippable # frame. So check self._fp position to skip the skippable frame. # |--data1--|--skippable--|--data2--| # cpos: ^P1 # dpos: ^P1 ^P2 if new_frame == old_frame and offset >= self._pos and self._fp.tell() >= c_pos: pass else: # Jump self._pos = d_pos self._decompressor = None self._fp.seek(c_pos) # offset is bytes number to skip forward offset -= self._pos while offset > 0: offset -= len(self._decompress(offset)) return self._pos def get_seek_table_info(self) -> tuple[int, int, int]: return self._seek_table.get_info() # Compared to ZstdFile class, it's important to handle the seekable # of underlying file object carefully. Need to check seekable in # each situation. For example, there may be a CD-R file system that # is seekable when reading, but not seekable when appending. class SeekableZstdFile(io.BufferedIOBase): """This class can only create/write/read Zstandard Seekable Format file, or read 0-size file. It provides relatively fast seeking ability in read mode. """ # The format uses uint32_t for compressed/decompressed sizes. If flush # block a lot, compressed_size may exceed the limit, so set a max size. FRAME_MAX_C_SIZE: ClassVar[int] = 2 * 1024 * 1024 * 1024 # Zstd seekable format's example code also use 1GiB as max content size. 
FRAME_MAX_D_SIZE: ClassVar[int] = 1 * 1024 * 1024 * 1024 FLUSH_BLOCK: ClassVar[Literal[1]] = ZstdCompressor.FLUSH_BLOCK FLUSH_FRAME: ClassVar[Literal[2]] = ZstdCompressor.FLUSH_FRAME def __init__( self, filename: _StrOrBytesPath | BinaryIO, mode: Literal["r", "rb", "w", "wb", "a", "ab", "x", "xb"] = "r", *, level_or_option: _LevelOrOption | _Option = None, zstd_dict: _ZstdDict = None, read_size: int | _DeprecatedPlaceholder = _DEPRECATED_PLACEHOLDER, # type: ignore[has-type] write_size: int | _DeprecatedPlaceholder = _DEPRECATED_PLACEHOLDER, # type: ignore[has-type] max_frame_content_size: int = 1024 * 1024 * 1024, ) -> None: """Open a Zstandard Seekable Format file in binary mode. In read mode, the file can be 0-size file. filename can be either an actual file name (given as a str, bytes, or PathLike object), in which case the named file is opened, or it can be an existing file object to read from or write to. mode can be "r" for reading (default), "w" for (over)writing, "x" for creating exclusively, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively. In append mode ("a" or "ab"), filename argument can't be a file object, please use file path. Parameters level_or_option: When it's an int object, it represents compression level. When it's a dict object, it contains advanced compression parameters. Note, in read mode (decompression), it can only be a dict object, that represents decompression option. It doesn't support int type compression level in this case. zstd_dict: A ZstdDict object, pre-trained dictionary for compression / decompression. max_frame_content_size: In write/append modes (compression), when the uncompressed data size reaches max_frame_content_size, a frame is generated automatically. If the size is small, it will increase seeking speed, but reduce compression ratio. If the size is large, it will reduce seeking speed, but increase compression ratio. You can also manually generate a frame using f.flush(f.FLUSH_FRAME). """ if read_size == _DEPRECATED_PLACEHOLDER: read_size = 131075 else: warnings.warn( "pyzstd.SeekableZstdFile()'s read_size parameter is deprecated", DeprecationWarning, stacklevel=2, ) read_size = cast("int", read_size) if write_size == _DEPRECATED_PLACEHOLDER: write_size = 131591 else: warnings.warn( "pyzstd.SeekableZstdFile()'s write_size parameter is deprecated", DeprecationWarning, stacklevel=2, ) write_size = cast("int", write_size) self._fp: BinaryIO | None = None self._close_fp = False self._mode = _MODE_CLOSED self._buffer = None if not isinstance(mode, str): raise TypeError("mode must be a str") mode = mode.removesuffix("b") # type: ignore[assignment] # handle rb, wb, xb, ab # Read or write mode if mode == "r": if not isinstance(level_or_option, (type(None), dict)): raise TypeError( "In read mode (decompression), level_or_option argument " "should be a dict object, that represents decompression " "option. It doesn't support int type compression level " "in this case." ) if read_size <= 0: raise ValueError("read_size argument should > 0") if write_size != 131591: raise ValueError("write_size argument is only valid in write modes.") # Specified max_frame_content_size argument if max_frame_content_size != 1024 * 1024 * 1024: raise ValueError( "max_frame_content_size argument is only " "valid in write modes (compression)." ) mode_code = _MODE_READ elif mode in {"w", "a", "x"}: if not isinstance(level_or_option, (type(None), int, dict)): raise TypeError( "level_or_option argument should be int or dict object." 
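A brief sketch of the max_frame_content_size behavior documented below: with a 4-byte limit, a 10-byte write is split into frames of 4, 4, and 2 decompressed bytes (the last two bytes are flushed into a frame on close):

import io
from pyzstd import SeekableZstdFile

b = io.BytesIO()
with SeekableZstdFile(b, "w", max_frame_content_size=4) as f:
    f.write(b"1234567890")          # auto-split: 4 + 4 + (2 pending) bytes
b.seek(0)
with SeekableZstdFile(b, "r") as f:
    frames, _, d_size = f.seek_table_info
    assert (frames, d_size) == (3, 10)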
) if read_size != 131075: raise ValueError("read_size argument is only valid in read mode.") if write_size <= 0: raise ValueError("write_size argument should > 0") if not (0 < max_frame_content_size <= self.FRAME_MAX_D_SIZE): raise ValueError( "max_frame_content_size argument should be " f"0 < value <= {self.FRAME_MAX_D_SIZE}, " f"provided value is {max_frame_content_size}." ) # For seekable format self._max_frame_content_size = max_frame_content_size self._reset_frame_sizes() self._seek_table: _SeekTable | None = _SeekTable(read_mode=False) mode_code = _MODE_WRITE self._compressor: ZstdCompressor | None = ZstdCompressor( level_or_option=level_or_option, zstd_dict=zstd_dict ) self._pos = 0 # Load seek table in append mode if mode == "a": if not isinstance(filename, (str, bytes, PathLike)): raise TypeError( "In append mode ('a', 'ab'), " "SeekableZstdFile.__init__() method can't " "accept file object as filename argument. " "Please use file path (str/bytes/PathLike)." ) # Load seek table if file exists if isfile(filename): with open(filename, "rb") as f: if not hasattr(f, "seekable") or not f.seekable(): raise TypeError( "In SeekableZstdFile's append mode " "('a', 'ab'), the opened 'rb' file " "object should be seekable." ) self._seek_table.load_seek_table(f, seek_to_0=False) else: raise ValueError(f"Invalid mode: {mode!r}") # File object if isinstance(filename, (str, bytes, PathLike)): self._fp = cast("BinaryIO", open(filename, mode + "b")) # noqa: SIM115 self._close_fp = True elif hasattr(filename, "read") or hasattr(filename, "write"): self._fp = filename else: raise TypeError("filename must be a str, bytes, file or PathLike object") self._mode = mode_code if self._mode == _MODE_READ: raw = _SeekableDecompressReader( self._fp, zstd_dict=zstd_dict, option=cast("_Option", level_or_option), # checked earlier on read_size=read_size, ) self._buffer = io.BufferedReader(raw) elif mode == "a": if self._fp.seekable(): self._fp.seek(self._seek_table.get_full_c_size()) # type: ignore[union-attr] # Necessary if the current table has many (0, 0) entries self._fp.truncate() else: # Add the seek table frame self._seek_table.append_entry(self._seek_table.seek_frame_size, 0) # type: ignore[union-attr] # Emit a warning warnings.warn( ( "SeekableZstdFile is opened in append mode " "('a', 'ab'), but the underlying file object " "is not seekable. Therefore the seek table (a " "zstd skippable frame) at the end of the file " "can't be overwritten. Each time open such file " "in append mode, it will waste some storage " f"space. {self._seek_table.seek_frame_size} bytes " # type: ignore[union-attr] "were wasted this time." ), RuntimeWarning, 2, ) def _reset_frame_sizes(self) -> None: self._current_c_size = 0 self._current_d_size = 0 self._left_d_size = self._max_frame_content_size def _check_not_closed(self) -> None: if self.closed: raise ValueError("I/O operation on closed file") def _check_can_read(self) -> None: if not self.readable(): raise io.UnsupportedOperation("File not open for reading") def _check_can_write(self) -> None: if not self.writable(): raise io.UnsupportedOperation("File not open for writing") def close(self) -> None: """Flush and close the file. May be called more than once without error. Once the file is closed, any other operation on it will raise a ValueError. 
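Append mode, as enforced above, must be given a file path so the existing seek table can be reloaded and overwritten. A minimal round-trip sketch using a temporary file:

import os, tempfile
from pyzstd import SeekableZstdFile

fd, path = tempfile.mkstemp()
os.close(fd)
with SeekableZstdFile(path, "a") as f:
    f.write(b"hello")
with SeekableZstdFile(path, "a") as f:      # reopens, overwrites the old seek table
    f.write(b" world")
with SeekableZstdFile(path, "r") as f:
    assert f.read() == b"hello world"
os.remove(path)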
""" if self._mode == _MODE_CLOSED: return if self._fp is None: return try: if self._mode == _MODE_READ: if getattr(self, "_buffer", None): self._buffer.close() # type: ignore[union-attr] self._buffer = None elif self._mode == _MODE_WRITE: self.flush(self.FLUSH_FRAME) self._seek_table.write_seek_table(self._fp) # type: ignore[union-attr] self._compressor = None finally: self._mode = _MODE_CLOSED self._seek_table = None try: if self._close_fp: self._fp.close() finally: self._fp = None self._close_fp = False def write(self, data: Buffer) -> int: """Write a bytes-like object to the file. Returns the number of uncompressed bytes written, which is always the length of data in bytes. Note that due to buffering, the file on disk may not reflect the data written until .flush() or .close() is called. """ self._check_can_write() # Accept any data that supports the buffer protocol. # And memoryview's subview is faster than slice. with memoryview(data) as view, view.cast("B") as byte_view: nbytes = byte_view.nbytes pos = 0 while nbytes > 0: # Write size write_size = min(nbytes, self._left_d_size) # Compress & write compressed = self._compressor.compress( # type: ignore[union-attr] byte_view[pos : pos + write_size] ) output_size = self._fp.write(compressed) # type: ignore[union-attr] self._pos += write_size pos += write_size nbytes -= write_size # Cumulate self._current_c_size += output_size self._current_d_size += write_size self._left_d_size -= write_size # Should flush a frame if ( self._left_d_size == 0 or self._current_c_size >= self.FRAME_MAX_C_SIZE ): self.flush(self.FLUSH_FRAME) return pos def flush(self, mode: Literal[1, 2] = ZstdCompressor.FLUSH_BLOCK) -> None: """Flush remaining data to the underlying stream. The mode argument can be ZstdFile.FLUSH_BLOCK, ZstdFile.FLUSH_FRAME. Abuse of this method will reduce compression ratio, use it only when necessary. If the program is interrupted afterwards, all data can be recovered. To ensure saving to disk, also need to use os.fsync(fd). This method does nothing in reading mode. """ if self._mode == _MODE_READ: return self._check_not_closed() if mode not in {self.FLUSH_BLOCK, self.FLUSH_FRAME}: raise ValueError( "Invalid mode argument, expected either " "ZstdFile.FLUSH_FRAME or " "ZstdFile.FLUSH_BLOCK" ) if self._compressor.last_mode != mode: # type: ignore[union-attr] # Flush zstd block/frame, and write. compressed = self._compressor.flush(mode) # type: ignore[union-attr] output_size = self._fp.write(compressed) # type: ignore[union-attr] if hasattr(self._fp, "flush"): self._fp.flush() # type: ignore[union-attr] # Cumulate self._current_c_size += output_size # self._current_d_size += 0 # self._left_d_size -= 0 if mode == self.FLUSH_FRAME and self._current_c_size != 0: # Add an entry to seek table self._seek_table.append_entry(self._current_c_size, self._current_d_size) # type: ignore[union-attr] self._reset_frame_sizes() def read(self, size: int | None = -1) -> bytes: """Read up to size uncompressed bytes from the file. If size is negative or omitted, read until EOF is reached. Returns b"" if the file is already at EOF. """ if size is None: size = -1 self._check_can_read() return self._buffer.read(size) # type: ignore[union-attr] def read1(self, size: int = -1) -> bytes: """Read up to size uncompressed bytes, while trying to avoid making multiple reads from the underlying stream. Reads up to a buffer's worth of data if size is negative. Returns b"" if the file is at EOF. 
""" self._check_can_read() if size < 0: size = io.DEFAULT_BUFFER_SIZE return self._buffer.read1(size) # type: ignore[union-attr] def readinto(self, b: Buffer) -> int: """Read bytes into b. Returns the number of bytes read (0 for EOF). """ self._check_can_read() return self._buffer.readinto(b) # type: ignore[union-attr] def readinto1(self, b: Buffer) -> int: """Read bytes into b, while trying to avoid making multiple reads from the underlying stream. Returns the number of bytes read (0 for EOF). """ self._check_can_read() return self._buffer.readinto1(b) # type: ignore[union-attr] def readline(self, size: int | None = -1) -> bytes: """Read a line of uncompressed bytes from the file. The terminating newline (if present) is retained. If size is non-negative, no more than size bytes will be read (in which case the line may be incomplete). Returns b'' if already at EOF. """ self._check_can_read() return self._buffer.readline(size) # type: ignore[union-attr] def seek(self, offset: int, whence: int = io.SEEK_SET) -> int: """Change the file position. The new position is specified by offset, relative to the position indicated by whence. Possible values for whence are: 0: start of stream (default): offset must not be negative 1: current stream position 2: end of stream; offset must not be positive Returns the new file position. Note that seeking is emulated, so depending on the arguments, this operation may be extremely slow. """ self._check_can_read() return self._buffer.seek(offset, whence) # type: ignore[union-attr] def peek(self, size: int = -1) -> bytes: """Return buffered data without advancing the file position. Always returns at least one byte of data, unless at EOF. The exact number of bytes returned is unspecified. """ self._check_can_read() return self._buffer.peek(size) # type: ignore[union-attr] def __iter__(self) -> Self: self._check_can_read() return self def __next__(self) -> bytes: self._check_can_read() if ret := self._buffer.readline(): # type: ignore[union-attr] return ret raise StopIteration def tell(self) -> int: """Return the current file position.""" self._check_not_closed() if self._mode == _MODE_READ: return self._buffer.tell() # type: ignore[union-attr] if self._mode == _MODE_WRITE: return self._pos raise RuntimeError # impossible code path def fileno(self) -> int: """Return the file descriptor for the underlying file.""" self._check_not_closed() return self._fp.fileno() # type: ignore[union-attr] @property def name(self) -> str: """Return the file name for the underlying file.""" self._check_not_closed() return self._fp.name # type: ignore[union-attr] @property def closed(self) -> bool: """True if this file is closed.""" return self._mode == _MODE_CLOSED def writable(self) -> bool: """Return whether the file was opened for writing.""" self._check_not_closed() return self._mode == _MODE_WRITE def readable(self) -> bool: """Return whether the file was opened for reading.""" self._check_not_closed() return self._mode == _MODE_READ def seekable(self) -> bool: """Return whether the file supports seeking.""" return self.readable() and self._buffer.seekable() # type: ignore[union-attr] @property def seek_table_info(self) -> tuple[int, int, int] | None: """A tuple: (frames_number, compressed_size, decompressed_size) 1, Frames_number and compressed_size don't count the seek table frame (a zstd skippable frame at the end of the file). 2, In write modes, the part of data that has not been flushed to frames is not counted. 3, If the SeekableZstdFile object is closed, it's None. 
""" if self._mode == _MODE_WRITE: return self._seek_table.get_info() # type: ignore[union-attr] if self._mode == _MODE_READ: return self._buffer.raw.get_seek_table_info() # type: ignore[union-attr] return None @staticmethod def is_seekable_format_file(filename: _StrOrBytesPath | BinaryIO) -> bool: """Check if a file is Zstandard Seekable Format file or 0-size file. It parses the seek table at the end of the file, returns True if no format error. filename can be either a file path (str/bytes/PathLike), or can be an existing file object in reading mode. """ # Check argument if isinstance(filename, (str, bytes, PathLike)): fp: BinaryIO = open(filename, "rb") # noqa: SIM115 is_file_path = True elif ( hasattr(filename, "readable") and filename.readable() and hasattr(filename, "seekable") and filename.seekable() ): fp = filename is_file_path = False orig_pos = fp.tell() else: raise TypeError( "filename argument should be a str/bytes/PathLike object, " "or a file object that is readable and seekable." ) # Write mode uses less RAM table = _SeekTable(read_mode=False) try: # Read/Parse the seek table table.load_seek_table(fp, seek_to_0=False) except SeekableFormatError: ret = False else: ret = True finally: if is_file_path: fp.close() else: fp.seek(orig_pos) return ret pyzstd-0.19.1/src/pyzstd/_version.py0000644000000000000000000000002713615410400014400 0ustar00__version__ = "0.19.1" pyzstd-0.19.1/src/pyzstd/py.typed0000644000000000000000000000000013615410400013670 0ustar00pyzstd-0.19.1/tests/__init__.py0000644000000000000000000000000013615410400013320 0ustar00pyzstd-0.19.1/tests/test_seekable.py0000644000000000000000000017722113615410400014417 0ustar00from contextlib import contextmanager import array import gc import io import os import pathlib import random import sys import tempfile import unittest import warnings from io import BytesIO from math import ceil from unittest.mock import patch from pyzstd import ( compress, CParameter, decompress, DParameter, get_frame_size, SeekableZstdFile, SeekableFormatError, ZstdCompressor, ZstdDict, ZstdError, ZstdFile ) from pyzstd._seekable_zstdfile import _SeekTable @contextmanager def _check_deprecated(testcase): with warnings.catch_warnings(record=True) as warns: yield testcase.assertEqual(len(warns), 1) warn = warns[0] testcase.assertEqual(warn.category, DeprecationWarning) testcase.assertIn( str(warn.message), [ "pyzstd.SeekableZstdFile()'s read_size parameter is deprecated", "pyzstd.SeekableZstdFile()'s write_size parameter is deprecated", ] ) DECOMPRESSED = b'1234567890' assert len(DECOMPRESSED) == 10 COMPRESSED = compress(DECOMPRESSED) DICT = ZstdDict(b'a'*1024, is_raw=True) class SeekTableCase(unittest.TestCase): def create_table(self, sizes_lst, read_mode=True): table = _SeekTable(read_mode=read_mode) for item in sizes_lst: table.append_entry(*item) return table def test_array_append(self): # test array('I') t = _SeekTable(read_mode=False) t.append_entry(0xFFFFFFFF, 0) with self.assertRaises(ValueError): # impossible frame t.append_entry(0, 0xFFFFFFFF) with self.assertRaises(OverflowError): t.append_entry(0xFFFFFFFF+1, 123) with self.assertRaises(OverflowError): t.append_entry(123, 0xFFFFFFFF+1) with self.assertRaises(OverflowError): t.append_entry(-1, 123) with self.assertRaises(OverflowError): t.append_entry(123, -1) # test array('q') t = _SeekTable(read_mode=True) t.append_entry(-2**63, 2**63-1) self.assertEqual(t._cumulated_c_size[1], -2**63) self.assertEqual(t._cumulated_d_size[1], 2**63-1) with self.assertRaises(OverflowError): 
t.append_entry(-2**63-1, 0) with self.assertRaises(OverflowError): t.append_entry(2**63, 0) with self.assertRaises((OverflowError, ValueError)): t.append_entry(0, -2**63-1) with self.assertRaises((OverflowError, ValueError)): t.append_entry(0, 2**63) def test_case1(self): lst = [(9, 10), (9, 10), (9, 10)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18, 27]) self.assertEqual(list(t._cumulated_d_size), [0, 10, 20, 30]) self.assertEqual(t.get_full_c_size(), 27) self.assertEqual(t.get_full_d_size(), 30) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 10)) self.assertEqual(t.get_frame_sizes(3), (18, 20)) # find frame index self.assertEqual(t.index_by_dpos(-1), 1) self.assertEqual(t.index_by_dpos(0), 1) self.assertEqual(t.index_by_dpos(1), 1) self.assertEqual(t.index_by_dpos(9), 1) self.assertEqual(t.index_by_dpos(10), 2) self.assertEqual(t.index_by_dpos(11), 2) self.assertEqual(t.index_by_dpos(19), 2) self.assertEqual(t.index_by_dpos(20), 3) self.assertEqual(t.index_by_dpos(21), 3) self.assertEqual(t.index_by_dpos(29), 3) self.assertEqual(t.index_by_dpos(30), None) self.assertEqual(t.index_by_dpos(31), None) def test_add_00_entry(self): # don't add (0, 0) entry to internal table lst = [(9, 10), (0, 0), (0, 0), (9, 10)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, 2) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18]) self.assertEqual(list(t._cumulated_d_size), [0, 10, 20]) self.assertEqual(t.get_full_c_size(), 18) self.assertEqual(t.get_full_d_size(), 20) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 10)) # find frame index self.assertEqual(t.index_by_dpos(-1), 1) self.assertEqual(t.index_by_dpos(0), 1) self.assertEqual(t.index_by_dpos(1), 1) self.assertEqual(t.index_by_dpos(9), 1) self.assertEqual(t.index_by_dpos(10), 2) self.assertEqual(t.index_by_dpos(11), 2) self.assertEqual(t.index_by_dpos(19), 2) self.assertEqual(t.index_by_dpos(20), None) self.assertEqual(t.index_by_dpos(21), None) def test_case_empty(self): # empty lst = [] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, 0) self.assertEqual(list(t._cumulated_c_size), [0]) self.assertEqual(list(t._cumulated_d_size), [0]) self.assertEqual(t.get_full_c_size(), 0) self.assertEqual(t.get_full_d_size(), 0) self.assertEqual(t.get_frame_sizes(1), (0, 0)) # find frame index self.assertEqual(t.index_by_dpos(-1), None) self.assertEqual(t.index_by_dpos(0), None) self.assertEqual(t.index_by_dpos(1), None) def test_case_0_decompressed_size(self): # 0 d_size lst = [(9, 10), (9, 0), (9, 10)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18, 27]) self.assertEqual(list(t._cumulated_d_size), [0, 10, 10, 20]) self.assertEqual(t.get_full_c_size(), 27) self.assertEqual(t.get_full_d_size(), 20) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 10)) self.assertEqual(t.get_frame_sizes(3), (18, 10)) # find frame index self.assertEqual(t.index_by_dpos(9), 1) self.assertEqual(t.index_by_dpos(10), 3) self.assertEqual(t.index_by_dpos(11), 3) self.assertEqual(t.index_by_dpos(19), 3) self.assertEqual(t.index_by_dpos(20), None) self.assertEqual(t.index_by_dpos(21), 
None) def test_case_0_size_middle(self): # 0 size lst = [(9, 10), (9, 0), (9, 0), (9, 10)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18, 27, 36]) self.assertEqual(list(t._cumulated_d_size), [0, 10, 10, 10, 20]) self.assertEqual(t.get_full_c_size(), 36) self.assertEqual(t.get_full_d_size(), 20) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 10)) self.assertEqual(t.get_frame_sizes(4), (27, 10)) # find frame index self.assertEqual(t.index_by_dpos(9), 1) self.assertEqual(t.index_by_dpos(10), 4) self.assertEqual(t.index_by_dpos(11), 4) self.assertEqual(t.index_by_dpos(19), 4) self.assertEqual(t.index_by_dpos(20), None) self.assertEqual(t.index_by_dpos(21), None) def test_case_0_size_at_begin(self): # 0 size at begin lst = [(9, 0), (9, 0), (9, 10), (9, 10)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18, 27, 36]) self.assertEqual(list(t._cumulated_d_size), [0, 0, 0, 10, 20]) self.assertEqual(t.get_full_c_size(), 36) self.assertEqual(t.get_full_d_size(), 20) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 0)) self.assertEqual(t.get_frame_sizes(3), (18, 0)) self.assertEqual(t.get_frame_sizes(4), (27, 10)) # find frame index self.assertEqual(t.index_by_dpos(-1), 3) self.assertEqual(t.index_by_dpos(0), 3) self.assertEqual(t.index_by_dpos(1), 3) self.assertEqual(t.index_by_dpos(9), 3) self.assertEqual(t.index_by_dpos(10), 4) self.assertEqual(t.index_by_dpos(11), 4) self.assertEqual(t.index_by_dpos(19), 4) self.assertEqual(t.index_by_dpos(20), None) self.assertEqual(t.index_by_dpos(21), None) def test_case_0_size_at_end(self): # 0 size at end lst = [(9, 10), (9, 10), (9, 0), (9, 0)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 9, 18, 27, 36]) self.assertEqual(list(t._cumulated_d_size), [0, 10, 20, 20, 20]) self.assertEqual(t.get_full_c_size(), 36) self.assertEqual(t.get_full_d_size(), 20) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (9, 10)) self.assertEqual(t.get_frame_sizes(3), (18, 20)) self.assertEqual(t.get_frame_sizes(4), (27, 20)) # find frame index self.assertEqual(t.index_by_dpos(9), 1) self.assertEqual(t.index_by_dpos(10), 2) self.assertEqual(t.index_by_dpos(11), 2) self.assertEqual(t.index_by_dpos(19), 2) self.assertEqual(t.index_by_dpos(20), None) self.assertEqual(t.index_by_dpos(21), None) def test_case_0_size_all(self): # 0 size frames lst = [(1, 0), (1, 0), (1, 0)] t = self.create_table(lst) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, 1, 2, 3]) self.assertEqual(list(t._cumulated_d_size), [0, 0, 0, 0]) self.assertEqual(t.get_full_c_size(), 3) self.assertEqual(t.get_full_d_size(), 0) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(2), (1, 0)) self.assertEqual(t.get_frame_sizes(3), (2, 0)) # find frame index self.assertEqual(t.index_by_dpos(-1), None) self.assertEqual(t.index_by_dpos(0), None) self.assertEqual(t.index_by_dpos(1), None) def test_merge_frames1(self): lst = [(9, 10), (9, 10), (9, 10), (9, 10), (9, 10)] t = self.create_table(lst, read_mode=False) 
t._merge_frames(1) self.assertEqual(len(t), 1) self.assertEqual(list(t._frames), [45, 50]) t = self.create_table(lst, read_mode=False) t._merge_frames(2) self.assertEqual(len(t), 2) self.assertEqual(list(t._frames), [27, 30, 18, 20]) t = self.create_table(lst, read_mode=False) t._merge_frames(3) self.assertEqual(len(t), 3) self.assertEqual(list(t._frames), [18, 20, 18, 20, 9, 10]) t = self.create_table(lst, read_mode=False) t._merge_frames(4) self.assertEqual(len(t), 4) self.assertEqual(list(t._frames), [18, 20, 9, 10, 9, 10, 9, 10]) def test_merge_frames2(self): lst = [(9, 10), (9, 10), (9, 10), (9, 10), (9, 10), (9, 10)] t = self.create_table(lst, read_mode=False) t._merge_frames(1) self.assertEqual(len(t), 1) self.assertEqual(list(t._frames), [54, 60]) t = self.create_table(lst, read_mode=False) t._merge_frames(2) self.assertEqual(len(t), 2) self.assertEqual(list(t._frames), [27, 30, 27, 30]) t = self.create_table(lst, read_mode=False) t._merge_frames(3) self.assertEqual(len(t), 3) self.assertEqual(list(t._frames), [18, 20, 18, 20, 18, 20]) t = self.create_table(lst, read_mode=False) t._merge_frames(4) self.assertEqual(len(t), 4) self.assertEqual(list(t._frames), [18, 20, 18, 20, 9, 10, 9, 10]) t = self.create_table(lst, read_mode=False) t._merge_frames(5) self.assertEqual(len(t), 5) self.assertEqual(list(t._frames), [18, 20, 9, 10, 9, 10, 9, 10, 9, 10]) def test_load_empty(self): # empty b = BytesIO() t = _SeekTable(read_mode=True) t.load_seek_table(b, seek_to_0=True) self.assertEqual(len(t), 0) self.assertEqual(b.tell(), 0) def test_save_load(self): # save CSIZE = len(COMPRESSED) DSIZE = len(DECOMPRESSED) lst = [(CSIZE, DSIZE)] * 3 t = self.create_table(lst, read_mode=False) b = BytesIO() b.write(COMPRESSED*3) t.write_seek_table(b) b.seek(0) # load, seek_to_0=True t = _SeekTable(read_mode=True) t.load_seek_table(b, seek_to_0=True) self.assertEqual(b.tell(), 0) with self.assertRaises(AttributeError): t._frames self.assertEqual(t._frames_count, len(lst)) self.assertEqual(list(t._cumulated_c_size), [0, CSIZE, 2*CSIZE, 3*CSIZE]) self.assertEqual(list(t._cumulated_d_size), [0, DSIZE, 2*DSIZE, 3*DSIZE]) self.assertEqual(t.get_full_c_size(), 3*CSIZE) self.assertEqual(t.get_full_d_size(), 3*DSIZE) self.assertEqual(t.get_frame_sizes(1), (0, 0)) self.assertEqual(t.get_frame_sizes(3), (2*CSIZE, 2*DSIZE)) # load, seek_to_0=False t = _SeekTable(read_mode=True) t.load_seek_table(b, seek_to_0=False) self.assertEqual(b.tell(), len(b.getvalue())) def test_load_has_checksum(self): b = BytesIO() b.write(COMPRESSED) b.write(COMPRESSED) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 9+2*(4+4+4))) b.write(_SeekTable._s_3uint32.pack(len(COMPRESSED), len(DECOMPRESSED), 123)) b.write(_SeekTable._s_3uint32.pack(len(COMPRESSED), len(DECOMPRESSED), 456)) b.write(_SeekTable._s_footer.pack(2, 0b10000000, 0x8F92EAB1)) t = _SeekTable(read_mode=True) t.load_seek_table(b, seek_to_0=False) self.assertTrue(t._has_checksum) self.assertEqual(len(t), 2) def test_load_bad1(self): # 0 < length < 17 b = BytesIO(b'len<17') t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, (r'^Zstandard Seekable Format error: ' r'File size is less than')): t.load_seek_table(b, seek_to_0=True) # wrong Seekable_Magic_Number b = BytesIO() b.write(b'a'*18) b.write(_SeekTable._s_3uint32.pack(1, 0, 0x8F92EAB2)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'Format Magic Number'): t.load_seek_table(b, seek_to_0=True) # wrong Seek_Table_Descriptor b = BytesIO() b.write(b'a'*18) 
b.write(_SeekTable._s_footer.pack(1, 0b00010000, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'Reserved_Bits'): t.load_seek_table(b, seek_to_0=True) # wrong expected size b = BytesIO() b.write(b'a'*18) b.write(_SeekTable._s_footer.pack(100, 0b10000000, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'less than expected size'): t.load_seek_table(b, seek_to_0=True) # wrong Magic_Number b = BytesIO() b.write(b'a'*18) b.write(_SeekTable._s_2uint32.pack(0x184D2A5F, 9)) b.write(_SeekTable._s_footer.pack(0, 0b10000000, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'Magic_Number'): t.load_seek_table(b, seek_to_0=True) # wrong Frame_Size b = BytesIO() b.write(b'a'*18) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 10)) b.write(_SeekTable._s_footer.pack(0, 0b10000000, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'Frame_Size'): t.load_seek_table(b, seek_to_0=True) def test_load_bad2(self): # wrong Frame_Size b = BytesIO() b.write(COMPRESSED) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 9+8)) b.write(_SeekTable._s_2uint32.pack(0, len(DECOMPRESSED))) b.write(_SeekTable._s_footer.pack(1, 0, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'impossible'): t.load_seek_table(b, seek_to_0=True) # cumulated compressed size 1 b = BytesIO() b.write(COMPRESSED) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 9+8)) b.write(_SeekTable._s_2uint32.pack(200, len(DECOMPRESSED))) b.write(_SeekTable._s_footer.pack(1, 0, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'cumulated compressed size'): t.load_seek_table(b, seek_to_0=True) # cumulated compressed size 2 b = BytesIO() b.write(COMPRESSED) b.write(COMPRESSED) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 9+2*8)) b.write(_SeekTable._s_2uint32.pack(len(COMPRESSED)+1, len(DECOMPRESSED))) b.write(_SeekTable._s_2uint32.pack(len(COMPRESSED)+1, len(DECOMPRESSED))) b.write(_SeekTable._s_footer.pack(2, 0, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'cumulated compressed size'): t.load_seek_table(b, seek_to_0=True) # cumulated compressed size 3 b = BytesIO() b.write(COMPRESSED) b.write(COMPRESSED) b.write(_SeekTable._s_2uint32.pack(0x184D2A5E, 9+2*8)) b.write(_SeekTable._s_2uint32.pack(len(COMPRESSED)-1, len(DECOMPRESSED))) b.write(_SeekTable._s_2uint32.pack(len(COMPRESSED)-1, len(DECOMPRESSED))) b.write(_SeekTable._s_footer.pack(2, 0, 0x8F92EAB1)) b.seek(0) t = _SeekTable(read_mode=True) with self.assertRaisesRegex(SeekableFormatError, 'cumulated compressed size'): t.load_seek_table(b, seek_to_0=True) def test_write_table(self): class MockError(Exception): pass class Mock: def __len__(self): return 0xFFFFFFFF + 1 def __getitem__(self, key): raise MockError t = self.create_table([]) t._frames = Mock() try: with self.assertWarnsRegex(RuntimeWarning, '4294967296 entries'): t.write_seek_table(BytesIO()) except MockError: pass else: self.assertTrue(False, 'impossible code path') class SeekableZstdFileCase(unittest.TestCase): @classmethod def setUpClass(cls): b = BytesIO() with SeekableZstdFile(b, 'w') as f: pass cls.zero_frame = b.getvalue() b = BytesIO() with SeekableZstdFile(b, 'w') as f: f.write(DECOMPRESSED) cls.one_frame = b.getvalue() b = BytesIO() with SeekableZstdFile(b, 'w') as 
f: f.write(DECOMPRESSED) f.flush(f.FLUSH_FRAME) f.write(DECOMPRESSED) cls.two_frames = b.getvalue() @staticmethod def get_decompressed_sizes_list(dat): pos = 0 lst = [] while pos < len(dat): frame_len = get_frame_size(dat[pos:]) size = len(decompress(dat[pos:pos+frame_len])) lst.append(size) pos += frame_len return lst def test_class_shape(self): self.assertEqual(SeekableZstdFile.FLUSH_BLOCK, ZstdCompressor.FLUSH_BLOCK) self.assertEqual(SeekableZstdFile.FLUSH_FRAME, ZstdCompressor.FLUSH_FRAME) with self.assertRaises(AttributeError): SeekableZstdFile.CONTINUE self.assertEqual(SeekableZstdFile.FRAME_MAX_C_SIZE, 2*1024*1024*1024) self.assertEqual(SeekableZstdFile.FRAME_MAX_D_SIZE, 1*1024*1024*1024) with SeekableZstdFile(BytesIO(self.two_frames), 'r') as f: self.assertEqual(f.seek_table_info, (2, len(self.two_frames)-(17+2*8), len(DECOMPRESSED)*2)) with SeekableZstdFile(BytesIO(self.two_frames), 'w') as f: self.assertEqual(f.write(DECOMPRESSED), len(DECOMPRESSED)) self.assertEqual(f.flush(f.FLUSH_FRAME), None) self.assertEqual(f.write(DECOMPRESSED), len(DECOMPRESSED)) self.assertEqual(f.flush(f.FLUSH_FRAME), None) self.assertEqual(f.seek_table_info, (2, f._fp.tell(), len(DECOMPRESSED)*2)) def test_init(self): with SeekableZstdFile(BytesIO(self.two_frames)) as f: pass with SeekableZstdFile(BytesIO(), "w") as f: pass with SeekableZstdFile(BytesIO(), "x") as f: pass with self.assertRaisesRegex(TypeError, 'file path'): with SeekableZstdFile(BytesIO(), "a") as f: pass with SeekableZstdFile(BytesIO(), "w", level_or_option=12) as f: pass with SeekableZstdFile(BytesIO(), "w", level_or_option={CParameter.checksumFlag:1}) as f: pass with SeekableZstdFile(BytesIO(), "w", level_or_option={}) as f: pass with SeekableZstdFile(BytesIO(), "w", level_or_option=20, zstd_dict=DICT) as f: pass with SeekableZstdFile(BytesIO(), "r", level_or_option={DParameter.windowLogMax:25}) as f: pass with SeekableZstdFile(BytesIO(), "r", level_or_option={}, zstd_dict=DICT) as f: pass def test_init_with_PathLike_filename(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name with SeekableZstdFile(filename, "a") as f: f.write(DECOMPRESSED) with SeekableZstdFile(filename) as f: self.assertEqual(f.read(), DECOMPRESSED) with SeekableZstdFile(filename, "a") as f: f.write(DECOMPRESSED) with SeekableZstdFile(filename) as f: self.assertEqual(f.read(), DECOMPRESSED * 2) os.remove(filename) def test_init_with_filename(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name with SeekableZstdFile(filename) as f: pass with SeekableZstdFile(filename, "w") as f: pass with SeekableZstdFile(filename, "a") as f: pass os.remove(filename) def test_init_mode(self): bi = BytesIO() with SeekableZstdFile(bi, "r"): pass with SeekableZstdFile(bi, "rb"): pass with SeekableZstdFile(bi, "w"): pass with SeekableZstdFile(bi, "wb"): pass with self.assertRaisesRegex(TypeError, 'file path'): SeekableZstdFile(bi, "a") with self.assertRaisesRegex(TypeError, 'file path'): SeekableZstdFile(bi, "ab") def test_init_with_x_mode(self): with tempfile.NamedTemporaryFile() as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name for mode in ("x", "xb"): with SeekableZstdFile(filename, mode): pass with self.assertRaises(FileExistsError): with SeekableZstdFile(filename, mode): pass os.remove(filename) def 
test_init_bad_mode(self): with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(COMPRESSED), (3, "x")) with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "xt") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "x+") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "rx") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "wx") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "rt") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "r+") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "wt") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "w+") with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(COMPRESSED), "rw") with self.assertRaisesRegex(TypeError, r"NOT be CParameter"): SeekableZstdFile(BytesIO(), 'rb', level_or_option={CParameter.compressionLevel:5}) with self.assertRaisesRegex(TypeError, r"NOT be DParameter"): SeekableZstdFile(BytesIO(), 'wb', level_or_option={DParameter.windowLogMax:21}) with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(COMPRESSED), "r", level_or_option=12) def test_init_bad_check(self): with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(), "w", level_or_option='asd') # CHECK_UNKNOWN and anything above CHECK_ID_MAX should be invalid. with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), "w", level_or_option={999:9999}) with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), "w", level_or_option={CParameter.windowLog:99}) with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(self.two_frames), "r", level_or_option=33) with self.assertRaises(OverflowError): SeekableZstdFile(BytesIO(self.two_frames), level_or_option={DParameter.windowLogMax:2**31}) with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(self.two_frames), level_or_option={444:333}) with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(self.two_frames), zstd_dict={1:2}) with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(self.two_frames), zstd_dict=b'dict123456') def test_init_argument(self): # not readable class C: def readable(self): return False def seekable(self): return True def read(self, size=-1): return b'' obj = C() with self.assertRaisesRegex(TypeError, 'readable'): SeekableZstdFile(obj, 'r') # not seekable class C: def readable(self): return True def seekable(self): return False def read(self, size=-1): return b'' obj = C() with self.assertRaisesRegex(TypeError, 'seekable'): SeekableZstdFile(obj, 'r') # append mode b = BytesIO(self.two_frames) with self.assertRaisesRegex(TypeError, "can't accept file object"): SeekableZstdFile(b, 'ab') # specify max_frame_content_size in reading mode with self.assertRaisesRegex(ValueError, 'only valid in write modes'): SeekableZstdFile(b, 'r', max_frame_content_size=100) def test_init_sizes_arg(self): with _check_deprecated(self): with SeekableZstdFile(BytesIO(), 'r', read_size=1): pass with _check_deprecated(self): with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), 'r', read_size=0) with _check_deprecated(self): with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), 'r', read_size=-1) with _check_deprecated(self): with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(), 'r', read_size=(10,)) with _check_deprecated(self): with self.assertRaisesRegex(ValueError, 'read_size'): 
SeekableZstdFile(BytesIO(), 'w', read_size=10) with _check_deprecated(self): with SeekableZstdFile(BytesIO(), 'w', write_size=1): pass with _check_deprecated(self): with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), 'w', write_size=0) with _check_deprecated(self): with self.assertRaises(ValueError): SeekableZstdFile(BytesIO(), 'w', write_size=-1) with _check_deprecated(self): with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(), 'w', write_size=(10,)) with _check_deprecated(self): with self.assertRaisesRegex(ValueError, 'write_size'): SeekableZstdFile(BytesIO(), 'r', write_size=10) def test_init_append_fail(self): # get a temp file name with tempfile.NamedTemporaryFile(delete=False) as tmp_f: tmp_f.write(self.two_frames) filename = tmp_f.name # mock io.open, .seek() raises OSError. def mock_open(io_open): def get_file(*args, **kwargs): f = io_open(*args, **kwargs) if len(args) > 1 and args[1] == 'ab': def seekable(): return True def seek(offset, whence=0): assert offset > 0 assert whence == 0 raise OSError("xyz") f.seekable = seekable f.seek = seek return f return get_file # test .close() method with patch("builtins.open", mock_open(io.open)): with self.assertRaisesRegex(OSError, 'xyz'): SeekableZstdFile(filename, 'ab') # for PyPy gc.collect() os.remove(filename) def test_load(self): # empty b = BytesIO() with SeekableZstdFile(b, 'r') as f: with self.assertRaises(EOFError): f.read(10) # not a seekable format b = BytesIO(COMPRESSED*10) with self.assertRaisesRegex(SeekableFormatError, 'Format Magic Number'): SeekableZstdFile(b, 'r') def test_read(self): with SeekableZstdFile(BytesIO(self.zero_frame), 'r') as f: self.assertEqual(f.read(), b'') with SeekableZstdFile(BytesIO(self.one_frame), 'r') as f: self.assertEqual(f.read(), DECOMPRESSED) with SeekableZstdFile(BytesIO(self.two_frames), 'r') as f: self.assertEqual(f.read(), DECOMPRESSED*2) # bad file with self.assertRaisesRegex(SeekableFormatError, 'size is less than'): SeekableZstdFile(BytesIO(b'1'), 'r') with self.assertRaisesRegex(SeekableFormatError, 'The last 4 bytes'): SeekableZstdFile(BytesIO(COMPRESSED*30), 'r') # write mode with SeekableZstdFile(BytesIO(), 'w') as f: f.write(DECOMPRESSED) with self.assertRaisesRegex(io.UnsupportedOperation, "File not open for reading"): f.read(100) # closed with self.assertRaisesRegex(ValueError, "I/O operation on closed file"): f.read(100) def test_read_empty(self): with SeekableZstdFile(BytesIO(b''), 'r') as f: with self.assertRaises(EOFError): f.read() self.assertEqual(f.tell(), 0) def test_seek(self): with SeekableZstdFile(BytesIO(self.two_frames), 'r') as f: # get d size self.assertEqual(f.seek(0, io.SEEK_END), len(DECOMPRESSED)*2) self.assertEqual(f.tell(), len(DECOMPRESSED)*2) self.assertEqual(f.seek(1), 1) self.assertEqual(f.tell(), 1) self.assertEqual(f.read(), DECOMPRESSED[1:]+DECOMPRESSED) self.assertEqual(f.seek(-1), 0) self.assertEqual(f.tell(), 0) self.assertEqual(f.read(), DECOMPRESSED*2) self.assertEqual(f.seek(9), 9) self.assertEqual(f.tell(), 9) self.assertEqual(f.read(), DECOMPRESSED[9:]+DECOMPRESSED) self.assertEqual(f.seek(21), 20) self.assertEqual(f.tell(), 20) self.assertEqual(f.read(), b'') self.assertEqual(f.seek(0), 0) self.assertEqual(f.tell(), 0) self.assertEqual(f.read(), DECOMPRESSED*2) self.assertEqual(f.seek(20), 20) self.assertEqual(f.tell(), 20) self.assertEqual(f.read(), b'') def test_read_not_seekable(self): class C: def readable(self): return True def seekable(self): return False def read(self, size=-1): return b'' obj = C() with 
self.assertRaisesRegex(TypeError, 'using ZstdFile class'): SeekableZstdFile(obj, 'r') def test_read_fp_not_at_0(self): b = BytesIO(self.two_frames) b.seek(3) # it will seek b to 0 with SeekableZstdFile(b, 'r') as f: self.assertEqual(b.tell(), 0) self.assertEqual(f.read(), DECOMPRESSED*2) def test_write(self): DSIZE = len(DECOMPRESSED) # write b = BytesIO() with SeekableZstdFile(b, 'w') as f: self.assertEqual(f.write(DECOMPRESSED), DSIZE) self.assertEqual(f.tell(), DSIZE) self.assertIsNone(f.flush(f.FLUSH_BLOCK)) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.seek_table_info, (0, 0, 0)) self.assertIsNone(f.flush(f.FLUSH_FRAME)) self.assertEqual(f.tell(), DSIZE) fp_pos = f._fp.tell() self.assertEqual(f.seek_table_info, (1, fp_pos, DSIZE)) self.assertEqual(f.write(b'xyz'), 3) self.assertEqual(f.tell(), DSIZE+3) self.assertEqual(f.seek_table_info, (1, fp_pos, DSIZE)) f.flush(f.FLUSH_FRAME) self.assertEqual(f.tell(), DSIZE+3) self.assertEqual(f.seek_table_info, (2, f._fp.tell(), DSIZE+3)) dat = b.getvalue() lst = self.get_decompressed_sizes_list(dat) self.assertEqual(lst, [10, 3, 0]) # read mode with SeekableZstdFile(BytesIO(self.two_frames), 'r') as f: with self.assertRaisesRegex(io.UnsupportedOperation, 'File not open for writing'): f.write(b'1234') # closed file with self.assertRaisesRegex(ValueError, 'I/O operation on closed file'): f.write(b'1234') # read b.seek(0) with SeekableZstdFile(b, 'r') as f: self.assertEqual(f.read(), DECOMPRESSED + b'xyz') self.assertEqual(len(f._buffer.raw._seek_table), 2) with self.assertRaisesRegex(io.UnsupportedOperation, 'File not open for writing'): f.write(b'1234') def test_write_chunks(self): CHUNK_SIZE = 100 b1 = BytesIO() b1.write(b'a' * CHUNK_SIZE) b1.write(b'b' * CHUNK_SIZE) b1.write(b'c' * CHUNK_SIZE) b1.write(b'd' * CHUNK_SIZE) b1.seek(0) b2 = BytesIO() with SeekableZstdFile(b2, 'w', max_frame_content_size=CHUNK_SIZE) as f: dat = b1.read(CHUNK_SIZE) self.assertEqual(f.write(dat), CHUNK_SIZE) self.assertEqual(f.tell(), CHUNK_SIZE) dat = b1.read(CHUNK_SIZE-1) self.assertEqual(f.write(dat), CHUNK_SIZE-1) self.assertEqual(f.tell(), 2*CHUNK_SIZE-1) dat = b1.read(CHUNK_SIZE+1) self.assertEqual(f.write(dat), CHUNK_SIZE+1) self.assertEqual(f.tell(), 3*CHUNK_SIZE) dat = b1.read(CHUNK_SIZE) self.assertEqual(f.write(dat), CHUNK_SIZE) self.assertEqual(f.tell(), 4*CHUNK_SIZE) self.assertEqual(decompress(b2.getvalue()), b1.getvalue()) def test_write_arg(self): b = BytesIO() with SeekableZstdFile(b, 'w') as f: f.write(DECOMPRESSED) f.write(data=b'123') with self.assertRaises(TypeError): f.write() with self.assertRaises(TypeError): f.write(0) with self.assertRaises(TypeError): f.write('123') with self.assertRaises(TypeError): f.write(b'123', f.FLUSH_BLOCK) with self.assertRaises(TypeError): f.write(dat=b'123') def test_write_empty_frame(self): bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.flush(f.FLUSH_FRAME) self.assertEqual(f.tell(), 0) # 17 is a seek table without entry, 4+4+9 self.assertEqual(len(bo.getvalue()), 17) bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.flush(f.FLUSH_FRAME) self.assertEqual(f.tell(), 0) f.flush(f.FLUSH_FRAME) self.assertEqual(f.tell(), 0) # 17 is a seek table without entry, 4+4+9 self.assertEqual(len(bo.getvalue()), 17) # if .write(b''), generate empty content frame bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.write(b'') self.assertEqual(f.tell(), 0) # SeekableZstdFile.write() do nothing if length is 0 self.assertEqual(len(bo.getvalue()), 17) # has an empty content frame bo = BytesIO() with 
# flush(FLUSH_BLOCK) does emit a frame, so the output has an empty content frame bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.flush(f.FLUSH_BLOCK) self.assertEqual(f.tell(), 0) self.assertGreater(len(bo.getvalue()), 17) def test_write_empty_block(self): # If no internal data, .FLUSH_BLOCK returns b''. c = ZstdCompressor() self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') self.assertNotEqual(c.compress(b'123', c.FLUSH_BLOCK), b'') self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') self.assertEqual(c.compress(b''), b'') self.assertEqual(c.compress(b''), b'') self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') # mode == .last_mode: a repeated FLUSH_BLOCK writes nothing bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.write(b'123') self.assertEqual(f.tell(), 3) f.flush(f.FLUSH_BLOCK) self.assertEqual(f.tell(), 3) fp_pos = f._fp.tell() self.assertNotEqual(fp_pos, 0) f.flush(f.FLUSH_BLOCK) self.assertEqual(f.tell(), 3) self.assertEqual(f._fp.tell(), fp_pos) # mode != .last_mode bo = BytesIO() with SeekableZstdFile(bo, 'w') as f: f.flush(f.FLUSH_BLOCK) self.assertEqual(f.tell(), 0) self.assertEqual(f._fp.tell(), 0) f.write(b'') self.assertEqual(f.tell(), 0) f.flush(f.FLUSH_BLOCK) self.assertEqual(f.tell(), 0) self.assertEqual(f._fp.tell(), 0) def test_write_buffer_protocol(self): # don't use len() for buffer protocol objects arr = array.array("I", range(1000)) LENGTH = len(arr) * arr.itemsize # write b = BytesIO() with SeekableZstdFile(b, "wb", max_frame_content_size=33) as f: self.assertEqual(f.write(arr), LENGTH) self.assertEqual(f.tell(), LENGTH) f.flush(f.FLUSH_FRAME) self.assertEqual(f.seek_table_info, (ceil(LENGTH/33), f._fp.tell(), ceil(LENGTH))) # verify with SeekableZstdFile(b, "rb") as f: dat = f.read() self.assertEqual(dat, arr.tobytes()) def test_flush(self): DSIZE = len(DECOMPRESSED) b = BytesIO() with SeekableZstdFile(b, 'w') as f: self.assertEqual(f.flush(mode=f.FLUSH_FRAME), None) self.assertEqual(b.getvalue(), b'') self.assertEqual(f.write(DECOMPRESSED), DSIZE) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.flush(f.FLUSH_BLOCK), None) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.seek_table_info, (0, 0, 0)) self.assertEqual(f.flush(mode=f.FLUSH_FRAME), None) self.assertEqual(f.tell(), DSIZE) fp_pos = f._fp.tell() self.assertEqual(f.seek_table_info, (1, fp_pos, DSIZE)) f.write(DECOMPRESSED) self.assertEqual(f.tell(), DSIZE*2) self.assertEqual(f.seek_table_info, (1, fp_pos, DSIZE)) f.flush(f.FLUSH_FRAME) self.assertEqual(f.tell(), DSIZE*2) self.assertEqual(f.seek_table_info, (2, f._fp.tell(), DSIZE*2)) # closed file with self.assertRaisesRegex(ValueError, 'I/O operation'): f.flush() with self.assertRaisesRegex(ValueError, 'I/O operation'): f.flush(f.FLUSH_FRAME) # flush() does nothing in read mode b.seek(0) with SeekableZstdFile(b, 'r') as f: f.flush() f.flush(f.FLUSH_FRAME) def test_flush_arg(self): b = BytesIO() with SeekableZstdFile(b, 'w') as f: f.flush() f.flush(f.FLUSH_BLOCK) f.flush(f.FLUSH_FRAME) f.flush(mode=f.FLUSH_FRAME) self.assertEqual(ZstdCompressor.CONTINUE, 0) with self.assertRaises(ValueError): f.flush(ZstdCompressor.CONTINUE) with self.assertRaises((TypeError, ValueError)): f.flush(b'123') with self.assertRaises(TypeError): f.flush(b'123', f.FLUSH_BLOCK) with self.assertRaises(TypeError): f.flush(node=f.FLUSH_FRAME) def test_close(self): with BytesIO(self.two_frames) as src: f = SeekableZstdFile(src) f.close() # SeekableZstdFile.close() should not close the underlying file object. self.assertFalse(src.closed) # Try closing an already-closed SeekableZstdFile. f.close() self.assertFalse(src.closed)
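# As with the stdlib's compressed-file wrappers, close() closes the
# underlying fp only when SeekableZstdFile opened that fp itself from a
# path; a caller-supplied file object, as above, is left open.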
# Test with a real file on disk, opened directly by SeekableZstdFile. with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name f = SeekableZstdFile(filename) fp = f._fp f.close() # Here, SeekableZstdFile.close() *should* close the underlying file object. self.assertTrue(fp.closed) # Try closing an already-closed SeekableZstdFile. f.close() os.remove(filename) def test_close_exception(self): class B(BytesIO): def write(self, data): if data: raise OSError f = SeekableZstdFile(B(), 'w') with self.assertRaises(OSError): f.close() self.assertTrue(f.closed) self.assertIsNone(f._seek_table) def test_wrong_max_frame_content_size(self): with self.assertRaises(TypeError): SeekableZstdFile(BytesIO(), 'w', max_frame_content_size=None) with self.assertRaisesRegex(ValueError, 'max_frame_content_size'): SeekableZstdFile(BytesIO(), 'w', max_frame_content_size=0) with self.assertRaisesRegex(ValueError, 'max_frame_content_size'): SeekableZstdFile(BytesIO(), 'w', max_frame_content_size=1*1024*1024*1024+1) def test_write_max_content_size(self): DSIZE = len(DECOMPRESSED) TAIL = b'12345' TAILSIZE = len(TAIL) b = BytesIO() with SeekableZstdFile(b, 'w', max_frame_content_size=4) as f: # 4, 4, (2) self.assertEqual(f.write(DECOMPRESSED), DSIZE) self.assertEqual(f.tell(), DSIZE) fp_pos = f._fp.tell() self.assertEqual(f.seek_table_info, (2, fp_pos, 8)) # 4, 4, (2) self.assertIsNone(f.flush(f.FLUSH_BLOCK)) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.seek_table_info, (2, fp_pos, 8)) # 4, 4, 2 self.assertIsNone(f.flush(f.FLUSH_FRAME)) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.seek_table_info, (3, f._fp.tell(), DSIZE)) # 4, 4, 2, 4, (1) self.assertEqual(f.write(TAIL), TAILSIZE) self.assertEqual(f.tell(), DSIZE+TAILSIZE) self.assertEqual(f.seek_table_info, (4, f._fp.tell(), DSIZE+4)) # 4, 4, 2, 4, 1 self.assertIsNone(f.flush(f.FLUSH_FRAME)) self.assertEqual(f.tell(), DSIZE+TAILSIZE) self.assertEqual(f.seek_table_info, (5, f._fp.tell(), DSIZE+TAILSIZE)) # 4, 4, 2, 4, 1, # 4, 4, 4, (3) self.assertEqual(f.write(DECOMPRESSED+TAIL), DSIZE+TAILSIZE) self.assertEqual(f.tell(), (DSIZE+TAILSIZE)*2) self.assertEqual(f.seek_table_info, (8, f._fp.tell(), 27)) frames = [4, 4, 2, 4, 1, 4, 4, 4, 3, 0] self.assertEqual(self.get_decompressed_sizes_list(b.getvalue()), frames) b.seek(0) with SeekableZstdFile(b, 'r') as f: self.assertEqual(f.read(), DECOMPRESSED + TAIL + DECOMPRESSED + TAIL) # the -1: the final seek table frame has no entry of its own self.assertEqual(len(f._buffer.raw._seek_table), len(frames)-1) self.assertEqual(f.seek_table_info, (9, len(b.getvalue()) - (17+9*8), (DSIZE+TAILSIZE)*2)) def test_append_mode(self): DSIZE = len(DECOMPRESSED) with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name # two frames seekable format file with io.open(filename, 'wb') as f: f.write(self.two_frames) # append with SeekableZstdFile(filename, 'a') as f: # consistent with ZstdFile, in append mode the initial position # is 0. The user can get the correct position via f.seek_table_info.
self.assertEqual(f.tell(), 0) self.assertEqual(f.write(DECOMPRESSED), DSIZE) self.assertEqual(f.tell(), DSIZE) self.assertIsNone(f.flush()) self.assertEqual(f.tell(), DSIZE) self.assertEqual(f.write(DECOMPRESSED), DSIZE) self.assertEqual(f.tell(), DSIZE*2) self.assertIsNone(f.flush(f.FLUSH_FRAME)) self.assertEqual(f.tell(), DSIZE*2) # call .close() again self.assertTrue(f.closed) f.close() f.close() # verify with SeekableZstdFile(filename, 'r') as f: self.assertEqual(len(f._buffer.raw._seek_table), 3) self.assertEqual(f.read(), DECOMPRESSED*4) fsize = f.tell() self.assertEqual(fsize, 40) self.assertEqual(f.seek(fsize-7), fsize-7) self.assertEqual(f.read(), DECOMPRESSED[-7:]) self.assertEqual(f.seek(fsize-15), fsize-15) self.assertEqual(f.read(), (DECOMPRESSED*4)[-15:]) # [frame1, frame2, frame3, seek_table] with io.open(filename, 'rb') as f: dat = f.read() lst = self.get_decompressed_sizes_list(dat) self.assertEqual(lst, [10, 10, 20, 0]) self.assertEqual(decompress(dat), DECOMPRESSED*4) os.remove(filename) def test_append_new_file(self): with tempfile.NamedTemporaryFile(delete=True) as tmp_f: filename = tmp_f.name with SeekableZstdFile(filename, 'a'): pass self.assertTrue(os.path.isfile(filename)) os.remove(filename) def test_append_not_seekable(self): # in append mode, and the file is not seekable, the # current seek table frame can't be overwritten. # get a temp file name with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name # mock io.open, return False in append mode. def mock_open(io_open): def get_file(*args, **kwargs): f = io_open(*args, **kwargs) if len(args) > 1 and args[1] == 'ab': def seekable(*args, **kwargs): return False f.seekable = seekable return f return get_file # append 1 with patch("builtins.open", mock_open(io.open)): with self.assertWarnsRegex(RuntimeWarning, (r"at the end of the file " r"can't be overwritten" r".*?\. 0 bytes")): f = SeekableZstdFile(filename, 'a') f.write(DECOMPRESSED) f.flush(f.FLUSH_FRAME) f.write(DECOMPRESSED) f.close() # append 2 with patch("builtins.open", mock_open(io.open)): with self.assertWarnsRegex(RuntimeWarning, (r"at the end of the file " r"can't be overwritten" r".*?\d\d+ bytes")): f = SeekableZstdFile(filename, 'a') f.write(DECOMPRESSED) f.close() # verify content with SeekableZstdFile(filename, 'r') as f: self.assertEqual(f.read(), DECOMPRESSED*3) # [frame1, frame2, seek_table, frame3, seek_table] with io.open(filename, 'rb') as f: dat = f.read() lst = self.get_decompressed_sizes_list(dat) self.assertEqual(lst, [10, 10, 0, 10, 0]) self.assertEqual(decompress(dat), DECOMPRESSED*3) os.remove(filename) def test_append_loading_not_seekable(self): # in append mode, and 'rb' mode file object is not seekable, # the seek table can't be loaded. # get a temp file name with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name # write with SeekableZstdFile(filename, 'w') as f: f.write(DECOMPRESSED) # mock io.open, return False in 'rb' mode. 
def mock_open(io_open): def get_file(*args, **kwargs): f = io_open(*args, **kwargs) if len(args) > 1 and args[1] == 'rb': def seekable(*args, **kwargs): return False f.seekable = seekable return f return get_file # append with patch("builtins.open", mock_open(io.open)): with self.assertRaisesRegex( TypeError, (r"In SeekableZstdFile's append mode \('a', 'ab'\)," r".*?should be seekable")): SeekableZstdFile(filename, 'a') os.remove(filename) def test_bad_append(self): # can't accept file object with self.assertRaisesRegex(TypeError, "can't accept file object"): SeekableZstdFile(BytesIO(self.two_frames), 'ab') # two frames NOT seekable format file with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name with open(filename, 'wb') as f: f.write(COMPRESSED*2) with self.assertRaisesRegex(SeekableFormatError, 'Format Magic Number'): SeekableZstdFile(filename, 'a') os.remove(filename) def test_x_mode(self): with tempfile.NamedTemporaryFile() as tmp_f: filename = tmp_f.name for mode in ("x", "xb"): with SeekableZstdFile(filename, mode): pass with self.assertRaises(FileExistsError): with SeekableZstdFile(filename, mode): pass os.remove(filename) def test_is_seekable_format_file(self): # file object self.assertEqual( SeekableZstdFile.is_seekable_format_file(BytesIO(b'')), True) self.assertEqual( SeekableZstdFile.is_seekable_format_file(BytesIO(self.two_frames)), True) self.assertEqual( SeekableZstdFile.is_seekable_format_file(BytesIO(COMPRESSED)), False) self.assertEqual( SeekableZstdFile.is_seekable_format_file(BytesIO(COMPRESSED*100)), False) # file path with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name with io.open(filename, 'wb') as f: f.write(self.two_frames) self.assertEqual( SeekableZstdFile.is_seekable_format_file(filename), True) os.remove(filename) # not readable class C: def readable(self): return False def seekable(self): return True obj = C() with self.assertRaisesRegex(TypeError, 'readable'): SeekableZstdFile.is_seekable_format_file(obj) # not seekable class C: def readable(self): return True def seekable(self): return False def read(self, size=-1): return b'' obj = C() with self.assertRaisesRegex(TypeError, 'seekable'): SeekableZstdFile.is_seekable_format_file(obj) # raise exception class C: def readable(self): return True def seekable(self): return True def read(self, size=-1): raise OSError def seek(self, offset, whence=io.SEEK_SET): raise OSError def tell(self): return 1 obj = C() with self.assertRaises(OSError): SeekableZstdFile.is_seekable_format_file(obj) # seek back b = BytesIO(COMPRESSED*3) POS = 5 self.assertEqual(b.seek(POS), POS) self.assertEqual(b.tell(), POS) self.assertEqual(SeekableZstdFile.is_seekable_format_file(b), False) self.assertEqual(b.tell(), POS) def test_skip_large_skippable_frame(self): # generate test file, has a 10 MiB skippable frame CSIZE = len(COMPRESSED) DSIZE = len(DECOMPRESSED) _10MiB = 10*1024*1024 sf = (0x184D2A50).to_bytes(4, byteorder='little') + \ (_10MiB).to_bytes(4, byteorder='little') + \ b'a' * _10MiB t = _SeekTable(read_mode=False) t.append_entry(CSIZE, DSIZE) t.append_entry(len(sf), 0) t.append_entry(CSIZE, DSIZE) content = BytesIO() content.write(COMPRESSED) content.write(sf) content.write(COMPRESSED) t.write_seek_table(content) b = content.getvalue() self.assertGreater(len(b), 2*CSIZE + _10MiB) # read all content.seek(0) with ZstdFile(content, 'r') as f: self.assertEqual(f.read(), DECOMPRESSED*2) with SeekableZstdFile(content, 'r') as f: self.assertEqual(f.read(), DECOMPRESSED*2) class 
B(BytesIO): def read(self, size=-1): if CSIZE + 1024*1024 < self.tell() < CSIZE + _10MiB: raise Exception('should skip the skippable frame') return super().read(size) # |--data1--|--skippable--|--data2--| # ^P1 ^P2 with SeekableZstdFile(B(b)) as f: t = f._buffer.raw._seek_table # to P1 self.assertEqual(f.read(DSIZE), DECOMPRESSED) self.assertEqual(f.tell(), DSIZE) self.assertEqual(t.index_by_dpos(DSIZE), 3) self.assertLess(f._fp.tell(), 5*1024*1024) # new position # if new_frame == old_frame and offset >= self._pos and \ # c_pos - self._fp.tell() < 1*1024*1024: # pass # else: # do_jump NEW_POS = DSIZE + 3 self.assertEqual(t.index_by_dpos(NEW_POS), 3) self.assertGreaterEqual(NEW_POS, f.tell()) c_pos, d_pos = t.get_frame_sizes(3) self.assertGreaterEqual(c_pos, _10MiB) self.assertEqual(d_pos, DSIZE) self.assertGreaterEqual(c_pos - f._fp.tell(), 1024*1024) # cross the skippable frame self.assertEqual(f.seek(NEW_POS), NEW_POS) self.assertGreater(f._fp.tell(), _10MiB) self.assertEqual(f.read(), DECOMPRESSED[3:]) def run_with_real_data(self, CLS): _100KiB = 100*1024 _1MiB = 1*1024*1024 b = bytes([random.randint(0, 255) for _ in range(128*1024)]) b *= 8 self.assertEqual(len(b), _1MiB) # write, -100000 makes low compression ratio. bo = BytesIO() with SeekableZstdFile(bo, 'w', level_or_option= {CParameter.compressionLevel:-100000, CParameter.checksumFlag:1}, max_frame_content_size=_100KiB) as f: self.assertEqual(f.write(b), len(b)) self.assertEqual(f.tell(), len(b)) # call .close() again self.assertTrue(f.closed) f.close() f.close() SEEKABLE_FILE_SIZE = len(bo.getvalue()) # frames self.assertEqual(self.get_decompressed_sizes_list(bo.getvalue()), [102400, 102400, 102400, 102400, 102400, 102400, 102400, 102400, 102400, 102400, 24576, 0]) # test 1 bo.seek(0) with CLS(bo, 'r') as f: self.assertEqual(f.read(), b) self.assertEqual(f.tell(), _1MiB) self.assertEqual(f._buffer.raw.tell(), _1MiB) # call .close() again self.assertTrue(f.closed) f.close() f.close() # test 2 bo.seek(0) with CLS(bo, 'r') as f: # frames number if CLS is SeekableZstdFile: self.assertEqual(len(f._buffer.raw._seek_table), ceil(_1MiB/_100KiB)) # read 1 OFFSET1 = 23 OFFSET2 = 3 * _100KiB + 1234 self.assertEqual(f.seek(OFFSET1), OFFSET1) self.assertEqual(f.seek(OFFSET2, 1), OFFSET1+OFFSET2) self.assertEqual(f.tell(), OFFSET1+OFFSET2) self.assertEqual(f.read(300), b[OFFSET1+OFFSET2:OFFSET1+OFFSET2+300]) # > EOF self.assertEqual(f.seek(_1MiB+_100KiB), _1MiB) self.assertEqual(f.tell(), _1MiB) self.assertEqual(f.read(), b'') self.assertEqual(f._fp.tell(), SEEKABLE_FILE_SIZE) # read 2 self.assertEqual(f.seek(-123), 0) self.assertEqual(f.tell(), 0) self.assertEqual(f.read(300), b[:300]) # readlines self.assertEqual(f.seek(-_100KiB, 2), _1MiB-_100KiB) self.assertEqual(f.tell(), _1MiB-_100KiB) self.assertEqual(f.readlines(), BytesIO(b[-_100KiB:]).readlines()) # read 3 self.assertEqual(f.seek(123), 123) self.assertEqual(f.tell(), 123) self.assertEqual(f.read(_100KiB*2), b[123:123+_100KiB*2]) # read 4 self.assertEqual(f.seek(0), 0) self.assertEqual(f.tell(), 0) self.assertEqual(f.read(), b) # read all random_offset = random.randint(0, len(b)) self.assertEqual(f.seek(random_offset), random_offset) self.assertEqual(f.tell(), random_offset) self.assertEqual(f.read(), b[random_offset:]) self.assertEqual(f.tell(), _1MiB) self.assertEqual(f._buffer.raw.tell(), _1MiB) # call .close() again self.assertTrue(f.closed) f.close() f.close() def test_real_data(self): self.run_with_real_data(ZstdFile) self.run_with_real_data(SeekableZstdFile) def 
test_table_info(self): # read mode with SeekableZstdFile(BytesIO(self.two_frames), 'r') as f: self.assertEqual(f.read(), DECOMPRESSED*2) self.assertEqual(f.seek_table_info, (2, len(self.two_frames) - (17+2*8), len(DECOMPRESSED)*2) ) # write mode with SeekableZstdFile(BytesIO(), 'w') as f: f.write(DECOMPRESSED) f.flush(f.FLUSH_FRAME) f.write(DECOMPRESSED) f.flush(f.FLUSH_FRAME) self.assertEqual(f.seek_table_info, (2, f._fp.tell(), len(DECOMPRESSED)*2) ) # append mode with tempfile.NamedTemporaryFile(delete=False) as tmp_f: filename = tmp_f.name with io.open(filename, 'wb') as f: f.write(self.two_frames) with SeekableZstdFile(filename, 'a') as f: f.write(DECOMPRESSED) f.flush(f.FLUSH_FRAME) self.assertEqual(f.seek_table_info, (3, f._fp.tell(), len(DECOMPRESSED)*3) ) os.remove(filename) # closed self.assertIsNone(f.seek_table_info) if __name__ == "__main__": unittest.main() pyzstd-0.19.1/tests/test_zstd.py0000644000000000000000000040420613615410400013624 0ustar00from io import BytesIO, UnsupportedOperation from contextlib import contextmanager import builtins import gc import itertools import io import os import re import sys import array import pathlib import pickle import random import subprocess import tempfile import unittest import warnings from pyzstd import ZstdCompressor, RichMemZstdCompressor, \ ZstdDecompressor, EndlessZstdDecompressor, ZstdError, \ CParameter, DParameter, Strategy, \ compress, compress_stream, richmem_compress, \ decompress, decompress_stream, \ ZstdDict, train_dict, finalize_dict, \ zstd_version, zstd_version_info, zstd_support_multithread, \ compressionLevel_values, get_frame_info, get_frame_size, \ ZstdFile, open DAT_130K_D = None DAT_130K_C = None DECOMPRESSED_DAT = None COMPRESSED_DAT = None DECOMPRESSED_100_PLUS_32KB = None COMPRESSED_100_PLUS_32KB = None SKIPPABLE_FRAME = None THIS_FILE_BYTES = None THIS_FILE_STR = None COMPRESSED_THIS_FILE = None COMPRESSED_BOGUS = None SAMPLES = None TRAINED_DICT = None KB = 1024 MB = 1024*1024 @contextmanager def _check_deprecated(testcase): with warnings.catch_warnings(record=True) as warns: yield testcase.assertEqual(len(warns), 1) warn = warns[0] testcase.assertEqual(warn.category, DeprecationWarning) testcase.assertIn( str(warn.message), [ "pyzstd.ZstdFile()'s read_size parameter is deprecated", "pyzstd.ZstdFile()'s write_size parameter is deprecated", "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.compress_stream", "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.decompress_stream", "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.richmem_compress", "See https://pyzstd.readthedocs.io/en/stable/deprecated.html for alternatives to pyzstd.RichMemZstdCompressor", ] ) def setUpModule(): # uncompressed size 130KB, more than a zstd block. # with a frame epilogue, 4 bytes checksum. 
global DAT_130K_D DAT_130K_D = bytes([random.randint(0, 127) for _ in range(130*1024)]) global DAT_130K_C DAT_130K_C = compress(DAT_130K_D, {CParameter.checksumFlag:1}) global DECOMPRESSED_DAT DECOMPRESSED_DAT = b'abcdefg123456' * 1000 global COMPRESSED_DAT COMPRESSED_DAT = compress(DECOMPRESSED_DAT) global DECOMPRESSED_100_PLUS_32KB DECOMPRESSED_100_PLUS_32KB = b'a' * (100 + 32*1024) global COMPRESSED_100_PLUS_32KB COMPRESSED_100_PLUS_32KB = compress(DECOMPRESSED_100_PLUS_32KB) global SKIPPABLE_FRAME SKIPPABLE_FRAME = (0x184D2A50).to_bytes(4, byteorder='little') + \ (32*1024).to_bytes(4, byteorder='little') + \ b'a' * (32*1024) global THIS_FILE_BYTES, THIS_FILE_STR with builtins.open(os.path.abspath(__file__), 'rb') as f: THIS_FILE_BYTES = f.read() THIS_FILE_BYTES = re.sub(rb'\r?\n', rb'\n', THIS_FILE_BYTES) THIS_FILE_STR = THIS_FILE_BYTES.decode('utf-8') global COMPRESSED_THIS_FILE COMPRESSED_THIS_FILE = compress(THIS_FILE_BYTES) global COMPRESSED_BOGUS COMPRESSED_BOGUS = DECOMPRESSED_DAT # dict data words = [b'red', b'green', b'yellow', b'black', b'withe', b'blue', b'lilac', b'purple', b'navy', b'glod', b'silver', b'olive', b'dog', b'cat', b'tiger', b'lion', b'fish', b'bird'] lst = [] for i in range(300): sample = [b'%s = %d' % (random.choice(words), random.randrange(100)) for j in range(20)] sample = b'\n'.join(sample) lst.append(sample) global SAMPLES SAMPLES = lst assert len(SAMPLES) > 10 global TRAINED_DICT TRAINED_DICT = train_dict(SAMPLES, 3*1024) assert len(TRAINED_DICT.dict_content) <= 3*1024 class FunctionsTestCase(unittest.TestCase): def test_version(self): s = '.'.join((str(i) for i in zstd_version_info)) self.assertEqual(s, zstd_version) def test_compressionLevel_values(self): self.assertEqual(type(compressionLevel_values.default), int) self.assertEqual(type(compressionLevel_values.min), int) self.assertEqual(type(compressionLevel_values.max), int) self.assertLess(compressionLevel_values.min, compressionLevel_values.max) def test_compress_decompress(self): raw_dat = THIS_FILE_BYTES[:len(THIS_FILE_BYTES)//6] default, minv, maxv = compressionLevel_values for level in range(max(-20, minv), maxv+1): dat1 = compress(raw_dat, level) dat2 = decompress(dat1) self.assertEqual(dat2, raw_dat) def test_get_frame_info(self): # no dict info = get_frame_info(COMPRESSED_100_PLUS_32KB[:20]) self.assertEqual(info.decompressed_size, 32*1024+100) self.assertEqual(info.dictionary_id, 0) # use dict dat = compress(b'a'*345, zstd_dict=TRAINED_DICT) info = get_frame_info(dat) self.assertEqual(info.decompressed_size, 345) self.assertEqual(info.dictionary_id, TRAINED_DICT.dict_id) with self.assertRaisesRegex(ZstdError, 'not less than the frame header'): get_frame_info(b'aaaaaaaaaaaaaa') def test_get_frame_size(self): size = get_frame_size(COMPRESSED_100_PLUS_32KB) self.assertEqual(size, len(COMPRESSED_100_PLUS_32KB)) with self.assertRaisesRegex(ZstdError, 'not less than this complete frame'): get_frame_size(b'aaaaaaaaaaaaaa') class ClassShapeTestCase(unittest.TestCase): def test_ZstdCompressor(self): # class attributes ZstdCompressor.CONTINUE ZstdCompressor.FLUSH_BLOCK ZstdCompressor.FLUSH_FRAME # method & member ZstdCompressor() ZstdCompressor(12, TRAINED_DICT) c = ZstdCompressor(level_or_option=2, zstd_dict=TRAINED_DICT) c.compress(b'123456') c.compress(b'123456', ZstdCompressor.CONTINUE) c.compress(data=b'123456', mode=c.CONTINUE) c.flush() c.flush(ZstdCompressor.FLUSH_BLOCK) c.flush(mode=c.FLUSH_FRAME) c.last_mode # decompressor method & member with self.assertRaises(AttributeError): 
c.decompress(b'') with self.assertRaises(AttributeError): c.at_frame_edge with self.assertRaises(AttributeError): c.eof with self.assertRaises(AttributeError): c.needs_input # read only attribute with self.assertRaises(AttributeError): c.last_mode = ZstdCompressor.FLUSH_BLOCK # name self.assertIn('.ZstdCompressor', str(type(c))) # doesn't support pickle with self.assertRaises(TypeError): pickle.dumps(c) # supports subclass class SubClass(ZstdCompressor): pass def test_RichMemZstdCompressor(self): # class attributes with self.assertRaises(AttributeError): RichMemZstdCompressor.CONTINUE with self.assertRaises(AttributeError): RichMemZstdCompressor.FLUSH_BLOCK with self.assertRaises(AttributeError): RichMemZstdCompressor.FLUSH_FRAME # method & member with _check_deprecated(self): RichMemZstdCompressor() with _check_deprecated(self): RichMemZstdCompressor(12, TRAINED_DICT) with _check_deprecated(self): c = RichMemZstdCompressor(level_or_option=4, zstd_dict=TRAINED_DICT) c.compress(b'123456') c.compress(data=b'123456') # ZstdCompressor method & member with self.assertRaises(TypeError): c.compress(b'123456', ZstdCompressor.FLUSH_FRAME) with self.assertRaises(AttributeError): c.flush() with self.assertRaises(AttributeError): c.last_mode # decompressor method & member with self.assertRaises(AttributeError): c.decompress(b'') with self.assertRaises(AttributeError): c.at_frame_edge with self.assertRaises(AttributeError): c.eof with self.assertRaises(AttributeError): c.needs_input # name self.assertIn('.RichMemZstdCompressor', str(type(c))) # doesn't support pickle with self.assertRaises(TypeError): pickle.dumps(c) # supports subclass with _check_deprecated(self): class SubClass(RichMemZstdCompressor): pass def test_Decompressor(self): # method & member ZstdDecompressor() ZstdDecompressor(TRAINED_DICT, {}) d = ZstdDecompressor(zstd_dict=TRAINED_DICT, option={}) d.decompress(b'') d.decompress(b'', 100) d.decompress(data=b'', max_length = 100) d.eof d.needs_input d.unused_data # ZstdCompressor attributes with self.assertRaises(AttributeError): d.CONTINUE with self.assertRaises(AttributeError): d.FLUSH_BLOCK with self.assertRaises(AttributeError): d.FLUSH_FRAME with self.assertRaises(AttributeError): d.compress(b'') with self.assertRaises(AttributeError): d.flush() # EndlessZstdDecompressor attribute with self.assertRaises(AttributeError): d.at_frame_edge # read only attributes with self.assertRaises(AttributeError): d.eof = True with self.assertRaises(AttributeError): d.needs_input = True with self.assertRaises(AttributeError): d.unused_data = b'' # name self.assertIn('.ZstdDecompressor', str(type(d))) # doesn't support pickle with self.assertRaises(TypeError): pickle.dumps(d) # supports subclass class SubClass(ZstdDecompressor): pass def test_EndlessDecompressor(self): # method & member EndlessZstdDecompressor(TRAINED_DICT, {}) EndlessZstdDecompressor(zstd_dict=TRAINED_DICT, option={}) d = EndlessZstdDecompressor() d.decompress(b'') d.decompress(b'', 100) d.decompress(data=b'', max_length = 100) d.at_frame_edge d.needs_input # ZstdCompressor attributes with self.assertRaises(AttributeError): EndlessZstdDecompressor.CONTINUE with self.assertRaises(AttributeError): EndlessZstdDecompressor.FLUSH_BLOCK with self.assertRaises(AttributeError): EndlessZstdDecompressor.FLUSH_FRAME with self.assertRaises(AttributeError): d.compress(b'') with self.assertRaises(AttributeError): d.flush() # ZstdDecompressor attributes with self.assertRaises(AttributeError): d.eof with self.assertRaises(AttributeError): 
d.unused_data # read only attributes with self.assertRaises(AttributeError): d.needs_input = True with self.assertRaises(AttributeError): d.at_frame_edge = True # name self.assertIn('.EndlessZstdDecompressor', str(type(d))) # doesn't support pickle with self.assertRaises(TypeError): pickle.dumps(d) # supports subclass class SubClass(EndlessZstdDecompressor): pass def test_ZstdDict(self): zd = ZstdDict(b'12345678', is_raw=True) self.assertEqual(type(zd.dict_content), bytes) self.assertEqual(zd.dict_id, 0) self.assertEqual(zd.as_digested_dict[1], 0) self.assertEqual(zd.as_undigested_dict[1], 1) self.assertEqual(zd.as_prefix[1], 2) # name self.assertIn('.ZstdDict', str(type(zd))) # doesn't support pickle with self.assertRaisesRegex((TypeError, pickle.PicklingError), 'pickle'): pickle.dumps(zd) with self.assertRaisesRegex((TypeError, pickle.PicklingError), 'pickle'): pickle.dumps(zd.as_prefix) def test_Strategy(self): # class attributes Strategy.fast Strategy.dfast Strategy.greedy Strategy.lazy Strategy.lazy2 Strategy.btlazy2 Strategy.btopt Strategy.btultra Strategy.btultra2 def test_CParameter(self): CParameter.compressionLevel CParameter.windowLog CParameter.hashLog CParameter.chainLog CParameter.searchLog CParameter.minMatch CParameter.targetLength CParameter.strategy CParameter.targetCBlockSize CParameter.enableLongDistanceMatching CParameter.ldmHashLog CParameter.ldmMinMatch CParameter.ldmBucketSizeLog CParameter.ldmHashRateLog CParameter.contentSizeFlag CParameter.checksumFlag CParameter.dictIDFlag CParameter.nbWorkers CParameter.jobSize CParameter.overlapLog t = CParameter.windowLog.bounds() self.assertEqual(len(t), 2) self.assertEqual(type(t[0]), int) self.assertEqual(type(t[1]), int) def test_DParameter(self): DParameter.windowLogMax t = DParameter.windowLogMax.bounds() self.assertEqual(len(t), 2) self.assertEqual(type(t[0]), int) self.assertEqual(type(t[1]), int) def test_zstderror_pickle(self): try: decompress(b'invalid data') except Exception as e: s = pickle.dumps(e) obj = pickle.loads(s) self.assertEqual(type(obj), ZstdError) else: self.assertFalse(True, 'unreachable code path') class CompressorDecompressorTestCase(unittest.TestCase): def test_simple_bad_args(self): # ZstdCompressor self.assertRaises(TypeError, ZstdCompressor, []) self.assertRaises(TypeError, ZstdCompressor, level_or_option=3.14) self.assertRaises(TypeError, ZstdCompressor, level_or_option='abc') self.assertRaises(TypeError, ZstdCompressor, level_or_option=b'abc') self.assertRaises(TypeError, ZstdCompressor, zstd_dict=123) self.assertRaises(TypeError, ZstdCompressor, zstd_dict=b'abcd1234') self.assertRaises(TypeError, ZstdCompressor, zstd_dict={1:2, 3:4}) self.assertRaises(TypeError, ZstdCompressor, rich_mem=True) with self.assertRaises(OverflowError): ZstdCompressor(2**31) with self.assertRaises(OverflowError): ZstdCompressor({2**31 : 100}) with self.assertRaises(ValueError): ZstdCompressor({CParameter.windowLog:100}) with self.assertRaises(ValueError): ZstdCompressor({3333 : 100}) # EndlessZstdDecompressor self.assertRaises(TypeError, EndlessZstdDecompressor, ()) self.assertRaises(TypeError, EndlessZstdDecompressor, zstd_dict=123) self.assertRaises(TypeError, EndlessZstdDecompressor, zstd_dict=b'abc') self.assertRaises(TypeError, EndlessZstdDecompressor, zstd_dict={1:2, 3:4}) self.assertRaises(TypeError, EndlessZstdDecompressor, option=123) self.assertRaises(TypeError, EndlessZstdDecompressor, option='abc') self.assertRaises(TypeError, EndlessZstdDecompressor, option=b'abc') self.assertRaises(TypeError, 
EndlessZstdDecompressor, rich_mem=True) with self.assertRaises(OverflowError): EndlessZstdDecompressor(option={2**31 : 100}) with self.assertRaises(ValueError): EndlessZstdDecompressor(option={DParameter.windowLogMax:100}) with self.assertRaises(ValueError): EndlessZstdDecompressor(option={3333 : 100}) # Method bad arguments zc = ZstdCompressor() self.assertRaises(TypeError, zc.compress) self.assertRaises((TypeError, ValueError), zc.compress, b"foo", b"bar") self.assertRaises(TypeError, zc.compress, "str") self.assertRaises((TypeError, ValueError), zc.flush, b"foo") self.assertRaises(TypeError, zc.flush, b"blah", 1) self.assertRaises(ValueError, zc.compress, b'', -1) self.assertRaises(ValueError, zc.compress, b'', 3) self.assertRaises(ValueError, zc.flush, zc.CONTINUE) # 0 self.assertRaises(ValueError, zc.flush, 3) zc.compress(b'') zc.compress(b'', zc.CONTINUE) zc.compress(b'', zc.FLUSH_BLOCK) zc.compress(b'', zc.FLUSH_FRAME) empty = zc.flush() zc.flush(zc.FLUSH_BLOCK) zc.flush(zc.FLUSH_FRAME) lzd = EndlessZstdDecompressor() self.assertRaises(TypeError, lzd.decompress) self.assertRaises(TypeError, lzd.decompress, b"foo", b"bar") self.assertRaises(TypeError, lzd.decompress, "str") lzd.decompress(empty) def test_compress_parameters(self): d = {CParameter.compressionLevel : 10, CParameter.windowLog : 12, CParameter.hashLog : 10, CParameter.chainLog : 12, CParameter.searchLog : 12, CParameter.minMatch : 4, CParameter.targetLength : 12, CParameter.strategy : Strategy.lazy, CParameter.enableLongDistanceMatching : 1, CParameter.ldmHashLog : 12, CParameter.ldmMinMatch : 11, CParameter.ldmBucketSizeLog : 5, CParameter.ldmHashRateLog : 12, CParameter.contentSizeFlag : 1, CParameter.checksumFlag : 1, CParameter.dictIDFlag : 0, CParameter.nbWorkers : 2 if zstd_support_multithread else 0, CParameter.jobSize : 5*MB if zstd_support_multithread else 0, CParameter.overlapLog : 9 if zstd_support_multithread else 0, } if zstd_version_info >= (1, 5, 6): d[CParameter.targetCBlockSize] = 150 ZstdCompressor(level_or_option=d) # larger than signed int d1 = d.copy() d1[CParameter.ldmBucketSizeLog] = 2**31 self.assertRaises(OverflowError, ZstdCompressor, d1) # an out-of-range compressionLevel is rejected with ValueError, not clamped self.assertRaises(ValueError, compress, b'', compressionLevel_values.max+1) self.assertRaises(ValueError, compress, b'', compressionLevel_values.min-1) self.assertRaises(ValueError, compress, b'', {CParameter.compressionLevel:compressionLevel_values.max+1}) self.assertRaises(ValueError, compress, b'', {CParameter.compressionLevel:compressionLevel_values.min-1}) # zstd lib doesn't support MT compression if not zstd_support_multithread: with self.assertRaises(ZstdError): ZstdCompressor({CParameter.nbWorkers:4}) with self.assertRaises(ZstdError): ZstdCompressor({CParameter.jobSize:4}) with self.assertRaises(ZstdError): ZstdCompressor({CParameter.overlapLog:4}) # out of bounds error msg option = {CParameter.windowLog:100} with self.assertRaisesRegex(ValueError, (r"compression parameter 'window_log' " r'received an illegal value 100; the valid range is')): compress(b'', option) def test_decompress_parameters(self): d = {DParameter.windowLogMax : 15} EndlessZstdDecompressor(option=d) # larger than signed int d1 = d.copy() d1[DParameter.windowLogMax] = 2**31 self.assertRaises(OverflowError, EndlessZstdDecompressor, None, d1) # out of bounds error msg option = {DParameter.windowLogMax:100} with self.assertRaisesRegex(ValueError, (r"decompression parameter 'window_log_max' " r'received an illegal value 100; the valid range is')): decompress(b'', 
option=option) def test_unknown_compression_parameter(self): KEY = 100001234 option = {CParameter.compressionLevel: 10, KEY: 200000000} pattern = r"invalid compression parameter 'unknown parameter \(key %d\)'" \ % KEY with self.assertRaisesRegex(ValueError, pattern): ZstdCompressor(option) def test_unknown_decompression_parameter(self): KEY = 100001234 option = {DParameter.windowLogMax: DParameter.windowLogMax.bounds()[1], KEY: 200000000} pattern = r"invalid decompression parameter 'unknown parameter \(key %d\)'" \ % KEY with self.assertRaisesRegex(ValueError, pattern): ZstdDecompressor(option=option) @unittest.skipIf(not zstd_support_multithread, "zstd build doesn't support multi-threaded compression") def test_zstd_multithread_compress(self): size = 40*1024*1024 b = THIS_FILE_BYTES * (size // len(THIS_FILE_BYTES)) option = {CParameter.compressionLevel : 4, CParameter.nbWorkers : 2} # compress() dat1 = compress(b, option) dat2 = decompress(dat1) self.assertEqual(dat2, b) # richmem_compress() with _check_deprecated(self): dat1 = richmem_compress(b, option) dat2 = decompress(dat1) self.assertEqual(dat2, b) # ZstdCompressor c = ZstdCompressor(option) dat1 = c.compress(b, c.CONTINUE) dat2 = c.compress(b, c.FLUSH_BLOCK) dat3 = c.compress(b, c.FLUSH_FRAME) dat4 = decompress(dat1+dat2+dat3) self.assertEqual(dat4, b * 3) # ZstdFile with ZstdFile(BytesIO(), 'w', level_or_option=option) as f: f.write(b) def test_rich_mem_compress(self): b = THIS_FILE_BYTES[:len(THIS_FILE_BYTES)//3] with _check_deprecated(self): dat1 = richmem_compress(b) dat2 = decompress(dat1) self.assertEqual(dat2, b) @unittest.skipIf(not zstd_support_multithread, "zstd build doesn't support multi-threaded compression") def test_rich_mem_compress_warn(self): b = THIS_FILE_BYTES[:len(THIS_FILE_BYTES)//3] with _check_deprecated(self): dat1 = richmem_compress(b, {CParameter.nbWorkers:2}) dat2 = decompress(dat1) self.assertEqual(dat2, b) def test_set_pledged_input_size(self): DAT = DECOMPRESSED_100_PLUS_32KB CHUNK_SIZE = len(DAT) // 3 # wrong value c = ZstdCompressor() with self.assertRaisesRegex(ValueError, r'positive int less than'): c._set_pledged_input_size(-300) # wrong mode c = ZstdCompressor(1) c.compress(b'123456') self.assertEqual(c.last_mode, c.CONTINUE) with self.assertRaisesRegex(ValueError, r'last_mode == FLUSH_FRAME'): c._set_pledged_input_size(300) # None value c = ZstdCompressor(1) c._set_pledged_input_size(None) dat = c.compress(DAT) + c.flush() ret = get_frame_info(dat) self.assertEqual(ret.decompressed_size, None) # correct value c = ZstdCompressor(1) c._set_pledged_input_size(len(DAT)) chunks = [] posi = 0 while posi < len(DAT): dat = c.compress(DAT[posi:posi+CHUNK_SIZE]) posi += CHUNK_SIZE chunks.append(dat) dat = c.flush() chunks.append(dat) chunks = b''.join(chunks) ret = get_frame_info(chunks) self.assertEqual(ret.decompressed_size, len(DAT)) self.assertEqual(decompress(chunks), DAT) c._set_pledged_input_size(len(DAT)) # the second frame dat = c.compress(DAT) + c.flush() ret = get_frame_info(dat) self.assertEqual(ret.decompressed_size, len(DAT)) self.assertEqual(decompress(dat), DAT) # wrong value c = ZstdCompressor(1) c._set_pledged_input_size(len(DAT)+1) chunks = [] posi = 0 while posi < len(DAT): dat = c.compress(DAT[posi:posi+CHUNK_SIZE]) posi += CHUNK_SIZE chunks.append(dat) with self.assertRaises(ZstdError): c.flush() def test_decompress_1byte(self): d = EndlessZstdDecompressor() dat = d.decompress(COMPRESSED_THIS_FILE, 1) size = len(dat) while True: if d.needs_input: break else: dat = d.decompress(b'', 
1) if not dat: break size += len(dat) if size < len(THIS_FILE_BYTES): self.assertFalse(d.at_frame_edge) else: self.assertTrue(d.at_frame_edge) self.assertEqual(size, len(THIS_FILE_BYTES)) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_decompress_2bytes(self): d = EndlessZstdDecompressor() dat = d.decompress(COMPRESSED_THIS_FILE, 2) size = len(dat) while True: if d.needs_input: break else: dat = d.decompress(b'', 2) if not dat: break size += len(dat) if size < len(THIS_FILE_BYTES): self.assertFalse(d.at_frame_edge) else: self.assertTrue(d.at_frame_edge) self.assertEqual(size, len(THIS_FILE_BYTES)) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_decompress_3_1bytes(self): d = EndlessZstdDecompressor() bi = BytesIO(COMPRESSED_THIS_FILE) size = 0 while True: if d.needs_input: in_dat = bi.read(3) if not in_dat: break else: in_dat = b'' dat = d.decompress(in_dat, 1) size += len(dat) if size < len(THIS_FILE_BYTES): self.assertFalse(d.at_frame_edge) else: self.assertTrue(d.at_frame_edge) self.assertEqual(size, len(THIS_FILE_BYTES)) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_decompress_3_2bytes(self): d = EndlessZstdDecompressor() bi = BytesIO(COMPRESSED_THIS_FILE) size = 0 while True: if d.needs_input: in_dat = bi.read(3) if not in_dat: break else: in_dat = b'' dat = d.decompress(in_dat, 2) size += len(dat) if size < len(THIS_FILE_BYTES): self.assertFalse(d.at_frame_edge) else: self.assertTrue(d.at_frame_edge) self.assertEqual(size, len(THIS_FILE_BYTES)) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_decompress_1_3bytes(self): d = EndlessZstdDecompressor() bi = BytesIO(COMPRESSED_THIS_FILE) size = 0 while True: if d.needs_input: in_dat = bi.read(1) if not in_dat: break else: in_dat = b'' dat = d.decompress(in_dat, 3) size += len(dat) if size < len(THIS_FILE_BYTES): self.assertFalse(d.at_frame_edge) else: self.assertTrue(d.at_frame_edge) self.assertEqual(size, len(THIS_FILE_BYTES)) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_decompress_epilogue_flags(self): # DAT_130K_C has a 4 bytes checksum at frame epilogue _130KB = 130 * 1024 # full unlimited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C) self.assertEqual(len(dat), _130KB) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(len(dat), 0) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(len(dat), 0) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # full limited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C, _130KB) self.assertEqual(len(dat), _130KB) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'', 0) self.assertEqual(len(dat), 0) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # [:-4] unlimited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-4]) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) # [:-4] limited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-4], _130KB) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'', 0) self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) # [:-3] unlimited d = 
EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-3]) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) # [:-3] limited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-3], _130KB) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'', 0) self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) # [:-1] unlimited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-1]) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) # [:-1] limited d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C[:-1], _130KB) self.assertEqual(len(dat), _130KB) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'', 0) self.assertEqual(len(dat), 0) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) def test_decompress_2x130KB(self): decompressed_size = get_frame_info(DAT_130K_C).decompressed_size self.assertEqual(decompressed_size, 130 * 1024) d = EndlessZstdDecompressor() dat = d.decompress(DAT_130K_C + DAT_130K_C) self.assertEqual(len(dat), 2 * 130 * 1024) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_compress_flushblock(self): point = len(THIS_FILE_BYTES) // 2 c = ZstdCompressor() self.assertEqual(c.last_mode, c.FLUSH_FRAME) dat1 = c.compress(THIS_FILE_BYTES[:point]) self.assertEqual(c.last_mode, c.CONTINUE) dat1 += c.compress(THIS_FILE_BYTES[point:], c.FLUSH_BLOCK) self.assertEqual(c.last_mode, c.FLUSH_BLOCK) d = EndlessZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(dat2, THIS_FILE_BYTES) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) def test_compress_flushframe(self): # test compress & decompress point = len(THIS_FILE_BYTES) // 2 c = ZstdCompressor() dat1 = c.compress(THIS_FILE_BYTES[:point]) self.assertEqual(c.last_mode, c.CONTINUE) dat1 += c.compress(THIS_FILE_BYTES[point:], c.FLUSH_FRAME) self.assertEqual(c.last_mode, c.FLUSH_FRAME) nt = get_frame_info(dat1) self.assertEqual(nt.decompressed_size, None) # no content size d = EndlessZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(dat2, THIS_FILE_BYTES) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # single .FLUSH_FRAME mode has content size c = ZstdCompressor() dat = c.compress(THIS_FILE_BYTES, mode=c.FLUSH_FRAME) self.assertEqual(c.last_mode, c.FLUSH_FRAME) nt = get_frame_info(dat) self.assertEqual(nt.decompressed_size, len(THIS_FILE_BYTES)) def test_decompressor_arg(self): zd = ZstdDict(b'12345678', is_raw=True) with self.assertRaises(TypeError): d = ZstdDecompressor(zstd_dict={}) with self.assertRaises(TypeError): d = ZstdDecompressor(option=zd) ZstdDecompressor() ZstdDecompressor(zd, {}) ZstdDecompressor(zstd_dict=zd, option={DParameter.windowLogMax:25}) def test_decompressor_1(self): _130_KB = 130 * 1024 # empty d = ZstdDecompressor() dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.eof) # 130KB full d = ZstdDecompressor() dat = d.decompress(DAT_130K_C) self.assertEqual(len(dat), _130_KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) # 130KB full, limit output d = ZstdDecompressor() dat = d.decompress(DAT_130K_C, 
_130_KB) self.assertEqual(len(dat), _130_KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) # 130KB, without 4 bytes checksum d = ZstdDecompressor() dat = d.decompress(DAT_130K_C[:-4]) self.assertEqual(len(dat), _130_KB) self.assertFalse(d.eof) self.assertTrue(d.needs_input) # above, limit output d = ZstdDecompressor() dat = d.decompress(DAT_130K_C[:-4], _130_KB) self.assertEqual(len(dat), _130_KB) self.assertFalse(d.eof) self.assertFalse(d.needs_input) # full, unused_data TRAIL = b'89234893abcd' d = ZstdDecompressor() dat = d.decompress(DAT_130K_C + TRAIL, _130_KB) self.assertEqual(len(dat), _130_KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, TRAIL) def test_decompressor_chunks_read_300(self): _130_KB = 130 * 1024 TRAIL = b'89234893abcd' DAT = DAT_130K_C + TRAIL d = ZstdDecompressor() bi = BytesIO(DAT) lst = [] while True: if d.needs_input: dat = bi.read(300) if not dat: break else: raise Exception('should not get here') ret = d.decompress(dat) lst.append(ret) if d.eof: break ret = b''.join(lst) self.assertEqual(len(ret), _130_KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data + bi.read(), TRAIL) def test_decompressor_chunks_read_3(self): _130_KB = 130 * 1024 TRAIL = b'89234893' DAT = DAT_130K_C + TRAIL d = ZstdDecompressor() bi = BytesIO(DAT) lst = [] while True: if d.needs_input: dat = bi.read(3) if not dat: break else: dat = b'' ret = d.decompress(dat, 1) lst.append(ret) if d.eof: break ret = b''.join(lst) self.assertEqual(len(ret), _130_KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data + bi.read(), TRAIL) def test_compress_empty(self): # output empty content frame self.assertNotEqual(compress(b''), b'') with _check_deprecated(self): self.assertNotEqual(richmem_compress(b''), b'') c = ZstdCompressor() self.assertNotEqual(c.compress(b'', c.FLUSH_FRAME), b'') with _check_deprecated(self): c = RichMemZstdCompressor() self.assertNotEqual(c.compress(b''), b'') # output b'' bi = BytesIO(b'') bo = BytesIO() with _check_deprecated(self): ret = compress_stream(bi, bo) self.assertEqual(ret, (0, 0)) self.assertEqual(bo.getvalue(), b'') bi.close() bo.close() def test_decompress_empty(self): with self.assertRaises(ZstdError): decompress(b'') d = ZstdDecompressor() self.assertEqual(d.decompress(b''), b'') self.assertFalse(d.eof) d = EndlessZstdDecompressor() self.assertEqual(d.decompress(b''), b'') self.assertTrue(d.at_frame_edge) bi = BytesIO(b'') bo = BytesIO() with _check_deprecated(self): ret = decompress_stream(bi, bo) self.assertEqual(ret, (0, 0)) self.assertEqual(bo.getvalue(), b'') bi.close() bo.close() def test_decompress_empty_content_frame(self): DAT = compress(b'') # decompress self.assertGreaterEqual(len(DAT), 4) self.assertEqual(decompress(DAT), b'') with self.assertRaises(ZstdError): decompress(DAT[:-1]) # ZstdDecompressor d = ZstdDecompressor() dat = d.decompress(DAT) self.assertEqual(dat, b'') self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice d = ZstdDecompressor() dat = d.decompress(DAT[:-1]) self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # EndlessZstdDecompressor d = EndlessZstdDecompressor() dat = d.decompress(DAT) self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) d = EndlessZstdDecompressor() dat = 
d.decompress(DAT[:-1]) self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) class DecompressorFlagsTestCase(unittest.TestCase): @classmethod def setUpClass(cls): option = {CParameter.checksumFlag:1} c = ZstdCompressor(option) cls.DECOMPRESSED_42 = b'a'*42 cls.FRAME_42 = c.compress(cls.DECOMPRESSED_42, c.FLUSH_FRAME) cls.DECOMPRESSED_60 = b'a'*60 cls.FRAME_60 = c.compress(cls.DECOMPRESSED_60, c.FLUSH_FRAME) cls.FRAME_42_60 = cls.FRAME_42 + cls.FRAME_60 cls.DECOMPRESSED_42_60 = cls.DECOMPRESSED_42 + cls.DECOMPRESSED_60 cls._130KB = 130*1024 c = ZstdCompressor() cls.UNKNOWN_FRAME_42 = c.compress(cls.DECOMPRESSED_42) + c.flush() cls.UNKNOWN_FRAME_60 = c.compress(cls.DECOMPRESSED_60) + c.flush() cls.UNKNOWN_FRAME_42_60 = cls.UNKNOWN_FRAME_42 + cls.UNKNOWN_FRAME_60 cls.TRAIL = b'12345678abcdefg!@#$%^&*()_+|' def test_function_decompress(self): with self.assertRaises(ZstdError): decompress(b'') self.assertEqual(len(decompress(COMPRESSED_100_PLUS_32KB)), 100+32*1024) # 1 frame self.assertEqual(decompress(self.FRAME_42), self.DECOMPRESSED_42) self.assertEqual(decompress(self.UNKNOWN_FRAME_42), self.DECOMPRESSED_42) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(self.FRAME_42[:1]) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(self.FRAME_42[:-4]) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(self.FRAME_42[:-1]) # 2 frames self.assertEqual(decompress(self.FRAME_42_60), self.DECOMPRESSED_42_60) self.assertEqual(decompress(self.UNKNOWN_FRAME_42_60), self.DECOMPRESSED_42_60) self.assertEqual(decompress(self.FRAME_42 + self.UNKNOWN_FRAME_60), self.DECOMPRESSED_42_60) self.assertEqual(decompress(self.UNKNOWN_FRAME_42 + self.FRAME_60), self.DECOMPRESSED_42_60) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(self.FRAME_42_60[:-4]) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(self.UNKNOWN_FRAME_42_60[:-1]) # 130KB self.assertEqual(decompress(DAT_130K_C), DAT_130K_D) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(DAT_130K_C[:-4]) with self.assertRaisesRegex(ZstdError, "Compressed data ended before the end-of-stream marker was reached"): decompress(DAT_130K_C[:-1]) # Unknown frame descriptor with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(b'aaaaaaaaa') with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(self.FRAME_42 + b'aaaaaaaaa') with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(self.UNKNOWN_FRAME_42_60 + b'aaaaaaaaa') # doesn't match checksum checksum = DAT_130K_C[-4:] if checksum[0] == 255: wrong_checksum = bytes([254]) + checksum[1:] else: wrong_checksum = bytes([checksum[0]+1]) + checksum[1:] dat = DAT_130K_C[:-4] + wrong_checksum with self.assertRaisesRegex(ZstdError, "doesn't match checksum"): decompress(dat) def test_function_skippable(self): self.assertEqual(decompress(SKIPPABLE_FRAME), b'') self.assertEqual(decompress(SKIPPABLE_FRAME + SKIPPABLE_FRAME), b'') # 1 frame + 2 skippable self.assertEqual(len(decompress(SKIPPABLE_FRAME + SKIPPABLE_FRAME + DAT_130K_C)), self._130KB) self.assertEqual(len(decompress(DAT_130K_C + SKIPPABLE_FRAME + 
SKIPPABLE_FRAME)), self._130KB) self.assertEqual(len(decompress(SKIPPABLE_FRAME + DAT_130K_C + SKIPPABLE_FRAME)), self._130KB) # unknown size self.assertEqual(decompress(SKIPPABLE_FRAME + self.UNKNOWN_FRAME_60), self.DECOMPRESSED_60) self.assertEqual(decompress(self.UNKNOWN_FRAME_60 + SKIPPABLE_FRAME), self.DECOMPRESSED_60) # 2 frames + 1 skippable self.assertEqual(decompress(self.FRAME_42 + SKIPPABLE_FRAME + self.FRAME_60), self.DECOMPRESSED_42_60) self.assertEqual(decompress(SKIPPABLE_FRAME + self.FRAME_42_60), self.DECOMPRESSED_42_60) self.assertEqual(decompress(self.UNKNOWN_FRAME_42_60 + SKIPPABLE_FRAME), self.DECOMPRESSED_42_60) # incomplete with self.assertRaises(ZstdError): decompress(SKIPPABLE_FRAME[:1]) with self.assertRaises(ZstdError): decompress(SKIPPABLE_FRAME[:-1]) with self.assertRaises(ZstdError): decompress(SKIPPABLE_FRAME[:-1] + self.FRAME_60) with self.assertRaises(ZstdError): decompress(self.FRAME_42 + SKIPPABLE_FRAME[:-1]) # Unknown frame descriptor with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(b'aaaaaaaaa' + SKIPPABLE_FRAME) with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(SKIPPABLE_FRAME + b'aaaaaaaaa') with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): decompress(SKIPPABLE_FRAME + SKIPPABLE_FRAME + b'aaaaaaaaa') def test_decompressor_1(self): # empty 1 d = ZstdDecompressor() dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(b'', 0) self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(COMPRESSED_100_PLUS_32KB + b'a') self.assertEqual(dat, DECOMPRESSED_100_PLUS_32KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'a') self.assertEqual(d.unused_data, b'a') # twice # empty 2 d = ZstdDecompressor() dat = d.decompress(b'', 0) self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(COMPRESSED_100_PLUS_32KB + b'a') self.assertEqual(dat, DECOMPRESSED_100_PLUS_32KB) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'a') self.assertEqual(d.unused_data, b'a') # twice # 1 frame d = ZstdDecompressor() dat = d.decompress(self.FRAME_42) self.assertEqual(dat, self.DECOMPRESSED_42) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice with self.assertRaises(EOFError): d.decompress(b'') # 1 frame, trail d = ZstdDecompressor() dat = d.decompress(self.FRAME_42 + self.TRAIL) self.assertEqual(dat, self.DECOMPRESSED_42) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, self.TRAIL) self.assertEqual(d.unused_data, self.TRAIL) # twice # 1 frame, 32KB temp = compress(b'a'*(32*1024)) d = ZstdDecompressor() dat = d.decompress(temp, 32*1024) self.assertEqual(dat, b'a'*(32*1024)) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice with 
self.assertRaises(EOFError): d.decompress(b'') # 1 frame, 32KB+100, trail d = ZstdDecompressor() dat = d.decompress(COMPRESSED_100_PLUS_32KB+self.TRAIL, 100) # 100 bytes self.assertEqual(len(dat), 100) self.assertFalse(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') dat = d.decompress(b'') # 32KB self.assertEqual(len(dat), 32*1024) self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, self.TRAIL) self.assertEqual(d.unused_data, self.TRAIL) # twice with self.assertRaises(EOFError): d.decompress(b'') # incomplete 1 d = ZstdDecompressor() dat = d.decompress(self.FRAME_60[:1]) self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # incomplete 2 d = ZstdDecompressor() dat = d.decompress(self.FRAME_60[:-4]) self.assertEqual(dat, self.DECOMPRESSED_60) self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # incomplete 3 d = ZstdDecompressor() dat = d.decompress(self.FRAME_60[:-1]) self.assertEqual(dat, self.DECOMPRESSED_60) self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') # incomplete 4 d = ZstdDecompressor() dat = d.decompress(self.FRAME_60[:-4], 60) self.assertEqual(dat, self.DECOMPRESSED_60) self.assertFalse(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # Unknown frame descriptor d = ZstdDecompressor() with self.assertRaisesRegex(ZstdError, "Unknown frame descriptor"): d.decompress(b'aaaaaaaaa') def test_decompressor_skippable(self): # 1 skippable d = ZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME) self.assertEqual(dat, b'') self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # 1 skippable, max_length=0 d = ZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME, 0) self.assertEqual(dat, b'') self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # 1 skippable, trail d = ZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME + self.TRAIL) self.assertEqual(dat, b'') self.assertTrue(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, self.TRAIL) self.assertEqual(d.unused_data, self.TRAIL) # twice # incomplete d = ZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME[:-1]) self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice # incomplete d = ZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME[:-1], 0) self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertFalse(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.eof) self.assertTrue(d.needs_input) self.assertEqual(d.unused_data, b'') self.assertEqual(d.unused_data, b'') # twice def test_endless_1(self): # empty d = EndlessZstdDecompressor() dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'', 0) self.assertEqual(dat, b'') 
self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 1 frame, a d = EndlessZstdDecompressor() dat = d.decompress(self.FRAME_42) self.assertEqual(dat, self.DECOMPRESSED_42) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(self.FRAME_60, 60) self.assertEqual(dat, self.DECOMPRESSED_60) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 1 frame, b d = EndlessZstdDecompressor() dat = d.decompress(self.FRAME_42, 21) self.assertNotEqual(dat, self.DECOMPRESSED_42) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat += d.decompress(self.FRAME_60, 21) self.assertEqual(dat, self.DECOMPRESSED_42) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'', 60) self.assertEqual(dat, self.DECOMPRESSED_60) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 1 frame, trail d = EndlessZstdDecompressor() dat = None with self.assertRaises(ZstdError): d.decompress(self.FRAME_42 + self.TRAIL) self.assertTrue(d.at_frame_edge) # has been reset self.assertTrue(d.needs_input) # has been reset # 2 frames, a d = EndlessZstdDecompressor() dat = d.decompress(self.FRAME_42_60) self.assertEqual(dat, self.DECOMPRESSED_42+self.DECOMPRESSED_60) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 2 frames, b d = EndlessZstdDecompressor() dat = d.decompress(self.FRAME_42_60, 42) self.assertEqual(dat, self.DECOMPRESSED_42) self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, self.DECOMPRESSED_60) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # incomplete d = EndlessZstdDecompressor() dat = d.decompress(self.FRAME_42_60[:-2]) self.assertEqual(dat, self.DECOMPRESSED_42 + self.DECOMPRESSED_60) self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) def test_endlessdecompressor_skippable(self): # 1 skippable d = EndlessZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME) self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 1 skippable, max_length=0 d = EndlessZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME, 0) self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # 1 skippable, trail d = EndlessZstdDecompressor() with self.assertRaises(ZstdError): d.decompress(SKIPPABLE_FRAME + self.TRAIL) self.assertEqual(dat, b'') self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # incomplete d = EndlessZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME[:-1], 0) self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) # incomplete d = EndlessZstdDecompressor() dat = d.decompress(SKIPPABLE_FRAME + SKIPPABLE_FRAME[:-1]) self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) dat = d.decompress(b'') self.assertEqual(dat, b'') self.assertFalse(d.at_frame_edge) self.assertTrue(d.needs_input) 
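# Summary of the semantics exercised above: ZstdDecompressor stops after a
# single frame -- .eof becomes True and the remaining bytes are exposed via
# .unused_data -- while EndlessZstdDecompressor decodes concatenated frames
# endlessly, reporting its position via .at_frame_edge / .needs_input;
# trailing non-frame bytes raise ZstdError and reset the decompressor.
# A minimal usage sketch (frame1 and frame2 stand for any two zstd frames):
#
#     d = EndlessZstdDecompressor()
#     out = d.decompress(frame1 + frame2)  # two frames in a single call
#     assert d.at_frame_edge and d.needs_input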
def test_EndlessZstdDecompressor_PEP489(self): class D(EndlessZstdDecompressor): def decompress(self, data): return super().decompress(data) d = D() self.assertEqual(d.decompress(self.FRAME_42_60), self.DECOMPRESSED_42_60) self.assertEqual(d.decompress(b''), b'') self.assertTrue(d.at_frame_edge) with self.assertRaises(ZstdError): d.decompress(b'123456789') class ZstdDictTestCase(unittest.TestCase): def test_is_raw(self): # content < 8 b = b'1234567' with self.assertRaises(ValueError): ZstdDict(b) # content == 8 b = b'12345678' zd = ZstdDict(b, is_raw=True) self.assertEqual(zd.dict_id, 0) temp = compress(b'aaa12345678', 3, zd) self.assertEqual(b'aaa12345678', decompress(temp, zd)) # is_raw == False b = b'12345678abcd' with self.assertRaises(ValueError): ZstdDict(b) # read only attributes with self.assertRaises(AttributeError): zd.dict_content = b with self.assertRaises(AttributeError): zd.dict_id = 10000 # ZstdDict arguments zd = ZstdDict(TRAINED_DICT.dict_content, is_raw=False) self.assertNotEqual(zd.dict_id, 0) zd = ZstdDict(TRAINED_DICT.dict_content, is_raw=True) self.assertNotEqual(zd.dict_id, 0) # note this assertion with self.assertRaises(TypeError): ZstdDict("12345678abcdef", is_raw=True) with self.assertRaises(TypeError): ZstdDict(TRAINED_DICT) # invalid parameter with self.assertRaises(TypeError): ZstdDict(desk333=345) def test_invalid_dict(self): DICT_MAGIC = 0xEC30A437.to_bytes(4, byteorder='little') dict_content = DICT_MAGIC + b'abcdefghighlmnopqrstuvwxyz' # corrupted zd = ZstdDict(dict_content, is_raw=False) with self.assertRaisesRegex(ZstdError, r'Failed to create a ZSTD_CDict instance'): ZstdCompressor(zstd_dict=zd.as_digested_dict) with self.assertRaisesRegex(ZstdError, r'Failed to create a ZSTD_DDict instance'): ZstdDecompressor(zd) # wrong type with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdCompressor(zstd_dict=(zd, b'123')) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdCompressor(zstd_dict=(zd, 1, 2)) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdCompressor(zstd_dict=(zd, -1)) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdCompressor(zstd_dict=(zd, 3)) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdDecompressor(zstd_dict=(zd, b'123')) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdDecompressor((zd, 1, 2)) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdDecompressor((zd, -1)) with self.assertRaisesRegex(TypeError, r'should be a ZstdDict object'): ZstdDecompressor((zd, 3)) def test_train_dict(self): DICT_SIZE1 = 3*1024 global TRAINED_DICT TRAINED_DICT = train_dict(SAMPLES, DICT_SIZE1) ZstdDict(TRAINED_DICT.dict_content, is_raw=False) self.assertNotEqual(TRAINED_DICT.dict_id, 0) self.assertGreater(len(TRAINED_DICT.dict_content), 0) self.assertLessEqual(len(TRAINED_DICT.dict_content), DICT_SIZE1) self.assertTrue(re.match(r'^<ZstdDict dict_id=\d+ dict_size=\d+>$', str(TRAINED_DICT))) # compress/decompress c = ZstdCompressor(zstd_dict=TRAINED_DICT) for sample in SAMPLES: dat1 = compress(sample, zstd_dict=TRAINED_DICT) dat2 = decompress(dat1, TRAINED_DICT) self.assertEqual(sample, dat2) dat1 = c.compress(sample) dat1 += c.flush() dat2 = decompress(dat1, TRAINED_DICT) self.assertEqual(sample, dat2) def test_finalize_dict(self): DICT_SIZE2 = 200*1024 C_LEVEL = 6 dic2 = finalize_dict(TRAINED_DICT, SAMPLES, DICT_SIZE2, C_LEVEL) self.assertNotEqual(dic2.dict_id, 0) self.assertGreater(len(dic2.dict_content), 0) 
self.assertLessEqual(len(dic2.dict_content), DICT_SIZE2) # compress/decompress c = ZstdCompressor(C_LEVEL, dic2) for sample in SAMPLES: dat1 = compress(sample, C_LEVEL, dic2) dat2 = decompress(dat1, dic2) self.assertEqual(sample, dat2) dat1 = c.compress(sample) dat1 += c.flush() dat2 = decompress(dat1, dic2) self.assertEqual(sample, dat2) # dict mismatch self.assertNotEqual(TRAINED_DICT.dict_id, dic2.dict_id) dat1 = compress(SAMPLES[0], zstd_dict=TRAINED_DICT) with self.assertRaises(ZstdError): decompress(dat1, dic2) def test_train_dict_arguments(self): with self.assertRaises(ValueError): train_dict([], 100*KB) with self.assertRaises(ValueError): train_dict(SAMPLES, -100) with self.assertRaises(ValueError): train_dict(SAMPLES, 0) def test_finalize_dict_arguments(self): finalize_dict(TRAINED_DICT, SAMPLES, 1*MB, 2) with self.assertRaises(ValueError): finalize_dict(TRAINED_DICT, [], 100*KB, 2) with self.assertRaises(ValueError): finalize_dict(TRAINED_DICT, SAMPLES, -100, 2) with self.assertRaises(ValueError): finalize_dict(TRAINED_DICT, SAMPLES, 0, 2) def test_as_prefix(self): # V1 V1 = THIS_FILE_BYTES zd = ZstdDict(V1, is_raw=True) # V2 mid = len(V1) // 2 V2 = V1[:mid] + \ (b'a' if V1[mid:mid+1] != b'a' else b'b') + \ V1[mid+1:] # compress with _check_deprecated(self): dat = richmem_compress(V2, zstd_dict=zd.as_prefix) self.assertEqual(get_frame_info(dat).dictionary_id, 0) # decompress self.assertEqual(decompress(dat, zd.as_prefix), V2) # use wrong prefix zd2 = ZstdDict(SAMPLES[0], is_raw=True) try: decompressed = decompress(dat, zd2.as_prefix) except ZstdError: # expected pass else: self.assertNotEqual(decompressed, V2) # read only attribute with self.assertRaises(AttributeError): zd.as_prefix = b'1234' def test_as_digested_dict(self): zd = TRAINED_DICT # test .as_digested_dict with _check_deprecated(self): dat = richmem_compress(SAMPLES[0], zstd_dict=zd.as_digested_dict) self.assertEqual(decompress(dat, zd.as_digested_dict), SAMPLES[0]) with self.assertRaises(AttributeError): zd.as_digested_dict = b'1234' # test .as_undigested_dict with _check_deprecated(self): dat = richmem_compress(SAMPLES[0], zstd_dict=zd.as_undigested_dict) self.assertEqual(decompress(dat, zd.as_undigested_dict), SAMPLES[0]) with self.assertRaises(AttributeError): zd.as_undigested_dict = b'1234' def test_advanced_compression_parameters(self): option = {CParameter.compressionLevel: 6, CParameter.windowLog: 20, CParameter.enableLongDistanceMatching: 1} # automatically select with _check_deprecated(self): dat = richmem_compress(SAMPLES[0], option, TRAINED_DICT) self.assertEqual(decompress(dat, TRAINED_DICT), SAMPLES[0]) # explicitly select with _check_deprecated(self): dat = richmem_compress(SAMPLES[0], option, TRAINED_DICT.as_digested_dict) self.assertEqual(decompress(dat, TRAINED_DICT), SAMPLES[0]) def test_len(self): self.assertEqual(len(TRAINED_DICT), len(TRAINED_DICT.dict_content)) self.assertIn(str(len(TRAINED_DICT)), str(TRAINED_DICT)) class OutputBufferTestCase(unittest.TestCase): @classmethod def setUpClass(cls): KB = 1024 MB = 1024 * 1024 # should be the same as the definition in _zstdmodule.c cls.BLOCK_SIZE = \ [ 32*KB, 64*KB, 256*KB, 1*MB, 4*MB, 8*MB, 16*MB, 16*MB, 32*MB, 32*MB, 32*MB, 32*MB, 64*MB, 64*MB, 128*MB, 128*MB, 256*MB ] # accumulated size cls.ACCUMULATED_SIZE = list(itertools.accumulate(cls.BLOCK_SIZE)) cls.TEST_RANGE = 5 cls.NO_SIZE_OPTION = {CParameter.compressionLevel: compressionLevel_values.min, CParameter.contentSizeFlag: 0} def compress_unknown_size(self, size): return compress(b'a' * size, 
self.NO_SIZE_OPTION) def test_empty_input(self): dat1 = b'' # decompress() function with self.assertRaises(ZstdError): decompress(dat1) # ZstdDecompressor class d = ZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), 0) self.assertFalse(d.eof) self.assertTrue(d.needs_input) # EndlessZstdDecompressor class d = EndlessZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), 0) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_zero_size_output(self): dat1 = self.compress_unknown_size(0) # decompress() function dat2 = decompress(dat1) self.assertEqual(len(dat2), 0) # ZstdDecompressor class d = ZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), 0) self.assertTrue(d.eof) self.assertFalse(d.needs_input) # EndlessZstdDecompressor class d = EndlessZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), 0) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_edge_sizes(self): for index in range(self.TEST_RANGE): for extra in [-1, 0, 1]: SIZE = self.ACCUMULATED_SIZE[index] + extra dat1 = self.compress_unknown_size(SIZE) # decompress() function dat2 = decompress(dat1) self.assertEqual(len(dat2), SIZE) # ZstdDecompressor class d = ZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), SIZE) self.assertTrue(d.eof) self.assertFalse(d.needs_input) # EndlessZstdDecompressor class d = EndlessZstdDecompressor() dat2 = d.decompress(dat1) self.assertEqual(len(dat2), SIZE) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_edge_sizes_stream(self): SIZE = self.ACCUMULATED_SIZE[self.TEST_RANGE] dat1 = self.compress_unknown_size(SIZE) # ZstdDecompressor class d = ZstdDecompressor() d.decompress(dat1, 0) for index in range(self.TEST_RANGE+1): B_SIZE = self.BLOCK_SIZE[index] dat2 = d.decompress(b'', B_SIZE) self.assertEqual(len(dat2), B_SIZE) self.assertFalse(d.needs_input) if index < self.TEST_RANGE: self.assertFalse(d.eof) else: self.assertTrue(d.eof) # EndlessZstdDecompressor class d = EndlessZstdDecompressor() d.decompress(dat1, 0) for index in range(self.TEST_RANGE+1): B_SIZE = self.BLOCK_SIZE[index] dat2 = d.decompress(b'', B_SIZE) self.assertEqual(len(dat2), B_SIZE) if index < self.TEST_RANGE: self.assertFalse(d.at_frame_edge) self.assertFalse(d.needs_input) else: self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_endlessdecompressor_2_frames(self): self.assertGreater(self.TEST_RANGE - 2, 0) for extra in [-1, 0, 1]: # frame 1 size SIZE1 = self.ACCUMULATED_SIZE[self.TEST_RANGE - 2] + extra # frame 2 size SIZE2 = self.ACCUMULATED_SIZE[self.TEST_RANGE] - SIZE1 FRAME1 = self.compress_unknown_size(SIZE1) FRAME2 = self.compress_unknown_size(SIZE2) # one step d = EndlessZstdDecompressor() dat2 = d.decompress(FRAME1 + FRAME2) self.assertEqual(len(dat2), SIZE1 + SIZE2) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) # two step d = EndlessZstdDecompressor() # frame 1 dat2 = d.decompress(FRAME1 + FRAME2, SIZE1) self.assertEqual(len(dat2), SIZE1) self.assertFalse(d.at_frame_edge) # input stream not at a frame edge self.assertFalse(d.needs_input) # frame 2 dat2 = d.decompress(b'') self.assertEqual(len(dat2), SIZE2) self.assertTrue(d.at_frame_edge) self.assertTrue(d.needs_input) def test_known_size(self): # only decompress() function supports first frame with known size # 1 frame, the decompressed size is known SIZE1 = 123456 known_size = compress(b'a' * SIZE1) dat = decompress(known_size) self.assertEqual(len(dat), SIZE1) # 2 
frames, the second frame's decompressed size is unknown for extra in [-1, 0, 1]: SIZE2 = self.BLOCK_SIZE[1] + self.BLOCK_SIZE[2] + extra unknown_size = self.compress_unknown_size(SIZE2) dat = decompress(known_size + unknown_size) self.assertEqual(len(dat), SIZE1 + SIZE2) # def test_large_output(self): # SIZE = self.ACCUMULATED_SIZE[-1] + self.BLOCK_SIZE[-1] + 100_000 # dat1 = self.compress_unknown_size(SIZE) # try: # dat2 = decompress(dat1) # except MemoryError: # return # leng_dat2 = len(dat2) # del dat2 # self.assertEqual(leng_dat2, SIZE) def test_endless_maxlength(self): DECOMPRESSED_SIZE = 100*KB dat1 = compress(b'a' * DECOMPRESSED_SIZE, -3) # -1 d = EndlessZstdDecompressor() dat2 = d.decompress(dat1, -1) self.assertEqual(len(dat2), DECOMPRESSED_SIZE) self.assertTrue(d.needs_input) self.assertTrue(d.at_frame_edge) # DECOMPRESSED_SIZE d = EndlessZstdDecompressor() dat2 = d.decompress(dat1, DECOMPRESSED_SIZE) self.assertEqual(len(dat2), DECOMPRESSED_SIZE) self.assertTrue(d.needs_input) self.assertTrue(d.at_frame_edge) # DECOMPRESSED_SIZE + 1 d = EndlessZstdDecompressor() dat2 = d.decompress(dat1, DECOMPRESSED_SIZE+1) self.assertEqual(len(dat2), DECOMPRESSED_SIZE) self.assertTrue(d.needs_input) self.assertTrue(d.at_frame_edge) # DECOMPRESSED_SIZE - 1 d = EndlessZstdDecompressor() dat2 = d.decompress(dat1, DECOMPRESSED_SIZE-1) self.assertEqual(len(dat2), DECOMPRESSED_SIZE-1) self.assertFalse(d.needs_input) self.assertFalse(d.at_frame_edge) dat2 = d.decompress(b'') self.assertEqual(len(dat2), 1) self.assertTrue(d.needs_input) self.assertTrue(d.at_frame_edge) class FileTestCase(unittest.TestCase): def setUp(self): self.DECOMPRESSED_42 = b'a'*42 self.FRAME_42 = compress(self.DECOMPRESSED_42) def test_init(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: pass with ZstdFile(BytesIO(), "w") as f: pass with ZstdFile(BytesIO(), "x") as f: pass with ZstdFile(BytesIO(), "a") as f: pass with ZstdFile(BytesIO(), "w", level_or_option=12) as f: pass with ZstdFile(BytesIO(), "w", level_or_option={CParameter.checksumFlag:1}) as f: pass with ZstdFile(BytesIO(), "w", level_or_option={}) as f: pass with ZstdFile(BytesIO(), "w", level_or_option=20, zstd_dict=TRAINED_DICT) as f: pass with ZstdFile(BytesIO(), "r", level_or_option={DParameter.windowLogMax:25}) as f: pass with ZstdFile(BytesIO(), "r", level_or_option={}, zstd_dict=TRAINED_DICT) as f: pass def test_init_with_PathLike_filename(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name with ZstdFile(filename, "a") as f: f.write(DECOMPRESSED_100_PLUS_32KB) with ZstdFile(filename) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) with ZstdFile(filename, "a") as f: f.write(DECOMPRESSED_100_PLUS_32KB) with ZstdFile(filename) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB * 2) os.remove(filename) def test_init_with_filename(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name with ZstdFile(filename) as f: pass with ZstdFile(filename, "w") as f: pass with ZstdFile(filename, "a") as f: pass os.remove(filename) def test_init_mode(self): bi = BytesIO() with ZstdFile(bi, "r"): pass with ZstdFile(bi, "rb"): pass with ZstdFile(bi, "w"): pass with ZstdFile(bi, "wb"): pass with ZstdFile(bi, "a"): pass with ZstdFile(bi, "ab"): pass def test_init_with_x_mode(self): with tempfile.NamedTemporaryFile() as tmp_f: if 
sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name for mode in ("x", "xb"): with ZstdFile(filename, mode): pass with self.assertRaises(FileExistsError): with ZstdFile(filename, mode): pass os.remove(filename) def test_init_bad_mode(self): with self.assertRaises(TypeError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), (3, "x")) with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "xt") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "x+") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "rx") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "wx") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "rt") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "r+") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "wt") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "w+") with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "rw") with self.assertRaisesRegex(TypeError, r"NOT be CParameter"): ZstdFile(BytesIO(), 'rb', level_or_option={CParameter.compressionLevel:5}) with self.assertRaisesRegex(TypeError, r"NOT be DParameter"): ZstdFile(BytesIO(), 'wb', level_or_option={DParameter.windowLogMax:21}) with self.assertRaises(TypeError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "r", level_or_option=12) def test_init_bad_check(self): with self.assertRaises(TypeError): ZstdFile(BytesIO(), "w", level_or_option='asd') # Invalid keys/values in level_or_option should be rejected. 
with self.assertRaises(ValueError): ZstdFile(BytesIO(), "w", level_or_option={999:9999}) with self.assertRaises(ValueError): ZstdFile(BytesIO(), "w", level_or_option={CParameter.windowLog:99}) with self.assertRaises(TypeError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "r", level_or_option=33) with self.assertRaises(OverflowError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), level_or_option={DParameter.windowLogMax:2**31}) with self.assertRaises(ValueError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), level_or_option={444:333}) with self.assertRaises(TypeError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), zstd_dict={1:2}) with self.assertRaises(TypeError): ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), zstd_dict=b'dict123456') def test_init_sizes_arg(self): with _check_deprecated(self): with ZstdFile(BytesIO(), 'r', read_size=1): pass with _check_deprecated(self): ZstdFile(BytesIO(), 'r', read_size=0) with _check_deprecated(self): ZstdFile(BytesIO(), 'r', read_size=-1) with _check_deprecated(self): ZstdFile(BytesIO(), 'r', read_size=(10,)) with _check_deprecated(self): ZstdFile(BytesIO(), 'w', read_size=10) with _check_deprecated(self): with ZstdFile(BytesIO(), 'w', write_size=1): pass with _check_deprecated(self): ZstdFile(BytesIO(), 'w', write_size=0) with _check_deprecated(self): ZstdFile(BytesIO(), 'w', write_size=-1) with _check_deprecated(self): ZstdFile(BytesIO(), 'w', write_size=(10,)) with _check_deprecated(self): ZstdFile(BytesIO(), 'r', write_size=10) def test_init_close_fp(self): # get a temp file name with tempfile.NamedTemporaryFile(delete=False) as tmp_f: tmp_f.write(DAT_130K_C) filename = tmp_f.name with self.assertRaises(TypeError): ZstdFile(filename, level_or_option={'a':'b'}) # for PyPy gc.collect() os.remove(filename) def test_close(self): with BytesIO(COMPRESSED_100_PLUS_32KB) as src: f = ZstdFile(src) f.close() # ZstdFile.close() should not close the underlying file object. self.assertFalse(src.closed) # Try closing an already-closed ZstdFile. f.close() self.assertFalse(src.closed) # Test with a real file on disk, opened directly by ZstdFile. with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name f = ZstdFile(filename) fp = f._fp f.close() # Here, ZstdFile.close() *should* close the underlying file object. self.assertTrue(fp.closed) # Try closing an already-closed ZstdFile. 
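# close() must be idempotent: calling it again on an already-closed ZstdFile
# is a no-op and must not raise.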
f.close() os.remove(filename) def test_closed(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: self.assertFalse(f.closed) f.read() self.assertFalse(f.closed) finally: f.close() self.assertTrue(f.closed) f = ZstdFile(BytesIO(), "w") try: self.assertFalse(f.closed) finally: f.close() self.assertTrue(f.closed) def test_fileno(self): # 1 f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: self.assertRaises(UnsupportedOperation, f.fileno) finally: f.close() self.assertRaises(ValueError, f.fileno) # 2 with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name f = ZstdFile(filename) try: self.assertEqual(f.fileno(), f._fp.fileno()) self.assertIsInstance(f.fileno(), int) finally: f.close() self.assertRaises(ValueError, f.fileno) os.remove(filename) # 3, no .fileno() method class C: def read(self, size=-1): return b'123' with ZstdFile(C(), 'rb') as f: with self.assertRaisesRegex(AttributeError, r'fileno'): f.fileno() def test_name(self): # 1 f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: with self.assertRaises(AttributeError): f.name finally: f.close() with self.assertRaises(ValueError): f.name # 2 with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): filename = pathlib.Path(tmp_f.name) else: filename = tmp_f.name f = ZstdFile(filename) try: self.assertEqual(f.name, f._fp.name) self.assertIsInstance(f.name, str) finally: f.close() with self.assertRaises(ValueError): f.name os.remove(filename) # 3, no .name attribute class C: def read(self, size=-1): return b'123' with ZstdFile(C(), 'rb') as f: with self.assertRaisesRegex(AttributeError, r'name'): f.name def test_seekable(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: self.assertTrue(f.seekable()) f.read() self.assertTrue(f.seekable()) finally: f.close() self.assertRaises(ValueError, f.seekable) f = ZstdFile(BytesIO(), "w") try: self.assertFalse(f.seekable()) finally: f.close() self.assertRaises(ValueError, f.seekable) src = BytesIO(COMPRESSED_100_PLUS_32KB) src.seekable = lambda: False f = ZstdFile(src) try: self.assertFalse(f.seekable()) finally: f.close() self.assertRaises(ValueError, f.seekable) def test_readable(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: self.assertTrue(f.readable()) f.read() self.assertTrue(f.readable()) finally: f.close() self.assertRaises(ValueError, f.readable) f = ZstdFile(BytesIO(), "w") try: self.assertFalse(f.readable()) finally: f.close() self.assertRaises(ValueError, f.readable) def test_writable(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) try: self.assertFalse(f.writable()) f.read() self.assertFalse(f.writable()) finally: f.close() self.assertRaises(ValueError, f.writable) f = ZstdFile(BytesIO(), "w") try: self.assertTrue(f.writable()) finally: f.close() self.assertRaises(ValueError, f.writable) def test_read(self): with ZstdFile(BytesIO(self.FRAME_42)) as f: self.assertEqual(f.read(), self.DECOMPRESSED_42) self.assertEqual(f.read(), b"") with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) self.assertEqual(f.read(), b"") with _check_deprecated(self): with ZstdFile(BytesIO(DAT_130K_C), read_size=64*1024) as f: self.assertEqual(f.read(), DAT_130K_D) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), level_or_option={DParameter.windowLogMax:20}) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) self.assertEqual(f.read(), b"") self.assertEqual(f.read(10), b"") def 
test_read_0(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: self.assertEqual(f.read(0), b"") self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), level_or_option={DParameter.windowLogMax:20}) as f: self.assertEqual(f.read(0), b"") # empty file with ZstdFile(BytesIO(b'')) as f: self.assertEqual(f.read(0), b"") with self.assertRaises(EOFError): f.read(10) with ZstdFile(BytesIO(b'')) as f: with self.assertRaises(EOFError): f.read(10) def test_read_10(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: chunks = [] while True: result = f.read(10) if not result: break self.assertLessEqual(len(result), 10) chunks.append(result) self.assertEqual(b"".join(chunks), DECOMPRESSED_100_PLUS_32KB) def test_read_multistream(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB * 5)) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB * 5) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB + SKIPPABLE_FRAME)) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB + COMPRESSED_DAT)) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB + DECOMPRESSED_DAT) def test_read_incomplete(self): with ZstdFile(BytesIO(DAT_130K_C[:-200])) as f: self.assertRaises(EOFError, f.read) # Trailing data isn't a valid compressed stream with ZstdFile(BytesIO(self.FRAME_42 + b'12345')) as f: self.assertRaises(ZstdError, f.read) with ZstdFile(BytesIO(SKIPPABLE_FRAME + b'12345')) as f: self.assertRaises(ZstdError, f.read) def test_read_truncated(self): # Drop stream epilogue: 4 bytes checksum truncated = DAT_130K_C[:-4] with ZstdFile(BytesIO(truncated)) as f: self.assertRaises(EOFError, f.read) with ZstdFile(BytesIO(truncated)) as f: # this is an important test, make sure it doesn't raise EOFError. 
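# Only the 4-byte checksum was dropped above, so the full 130 KiB of content
# is still decodable; a bounded read that can be satisfied must succeed, and
# EOFError may only surface once more data is actually required.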
self.assertEqual(f.read(130*1024), DAT_130K_D) with self.assertRaises(EOFError): f.read(1) # Incomplete header for i in range(1, 20): with ZstdFile(BytesIO(truncated[:i])) as f: self.assertRaises(EOFError, f.read, 1) def test_read_bad_args(self): f = ZstdFile(BytesIO(COMPRESSED_DAT)) f.close() self.assertRaises(ValueError, f.read) with ZstdFile(BytesIO(), "w") as f: self.assertRaises(ValueError, f.read) with ZstdFile(BytesIO(COMPRESSED_DAT)) as f: self.assertRaises(TypeError, f.read, float()) def test_read_bad_data(self): with ZstdFile(BytesIO(COMPRESSED_BOGUS)) as f: self.assertRaises(ZstdError, f.read) def test_read_exception(self): class C: def read(self, size=-1): raise OSError with ZstdFile(C()) as f: with self.assertRaises(OSError): f.read(10) def test_read1(self): with ZstdFile(BytesIO(DAT_130K_C)) as f: blocks = [] while True: result = f.read1() if not result: break blocks.append(result) self.assertEqual(b"".join(blocks), DAT_130K_D) self.assertEqual(f.read1(), b"") def test_read1_0(self): with ZstdFile(BytesIO(COMPRESSED_DAT)) as f: self.assertEqual(f.read1(0), b"") def test_read1_10(self): with ZstdFile(BytesIO(COMPRESSED_DAT)) as f: blocks = [] while True: result = f.read1(10) if not result: break blocks.append(result) self.assertEqual(b"".join(blocks), DECOMPRESSED_DAT) self.assertEqual(f.read1(), b"") def test_read1_multistream(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB * 5)) as f: blocks = [] while True: result = f.read1() if not result: break blocks.append(result) self.assertEqual(b"".join(blocks), DECOMPRESSED_100_PLUS_32KB * 5) self.assertEqual(f.read1(), b"") def test_read1_bad_args(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) f.close() self.assertRaises(ValueError, f.read1) with ZstdFile(BytesIO(), "w") as f: self.assertRaises(ValueError, f.read1) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: self.assertRaises(TypeError, f.read1, None) def test_readinto(self): arr = array.array("I", range(100)) self.assertEqual(len(arr), 100) self.assertEqual(len(arr) * arr.itemsize, 400) ba = bytearray(300) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: # 0 length output buffer self.assertEqual(f.readinto(ba[0:0]), 0) # use correct length for buffer protocol object self.assertEqual(f.readinto(arr), 400) self.assertEqual(arr.tobytes(), DECOMPRESSED_100_PLUS_32KB[:400]) # normal readinto self.assertEqual(f.readinto(ba), 300) self.assertEqual(ba, DECOMPRESSED_100_PLUS_32KB[400:700]) def test_peek(self): with ZstdFile(BytesIO(DAT_130K_C)) as f: result = f.peek() self.assertGreater(len(result), 0) self.assertTrue(DAT_130K_D.startswith(result)) self.assertEqual(f.read(), DAT_130K_D) with ZstdFile(BytesIO(DAT_130K_C)) as f: result = f.peek(10) self.assertGreater(len(result), 0) self.assertTrue(DAT_130K_D.startswith(result)) self.assertEqual(f.read(), DAT_130K_D) def test_peek_bad_args(self): with ZstdFile(BytesIO(), "w") as f: self.assertRaises(ValueError, f.peek) def test_iterator(self): with BytesIO(THIS_FILE_BYTES) as f: lines = f.readlines() compressed = compress(THIS_FILE_BYTES) # iter with ZstdFile(BytesIO(compressed)) as f: self.assertListEqual(list(iter(f)), lines) # readline with ZstdFile(BytesIO(compressed)) as f: for line in lines: self.assertEqual(f.readline(), line) self.assertEqual(f.readline(), b'') self.assertEqual(f.readline(), b'') # readlines with ZstdFile(BytesIO(compressed)) as f: self.assertListEqual(f.readlines(), lines) def test_decompress_limited(self): _ZSTD_DStreamInSize = 128*1024 + 3 bomb = compress(b'\0' * int(2e6), 
level_or_option=10) self.assertLess(len(bomb), _ZSTD_DStreamInSize) decomp = ZstdFile(BytesIO(bomb)) self.assertEqual(decomp.read(1), b'\0') # BufferedReader uses 128 KiB buffer in __init__.py max_decomp = 128*1024 self.assertLessEqual(decomp._buffer.raw.tell(), max_decomp, "Excessive amount of data was decompressed") def test_write(self): with BytesIO() as dst: with ZstdFile(dst, "w") as f: f.write(THIS_FILE_BYTES) comp = ZstdCompressor() expected = comp.compress(THIS_FILE_BYTES) + comp.flush() self.assertEqual(dst.getvalue(), expected) with BytesIO() as dst: with ZstdFile(dst, "w", level_or_option=12) as f: f.write(THIS_FILE_BYTES) comp = ZstdCompressor(12) expected = comp.compress(THIS_FILE_BYTES) + comp.flush() self.assertEqual(dst.getvalue(), expected) with BytesIO() as dst: with ZstdFile(dst, "w", level_or_option={CParameter.checksumFlag:1}) as f: f.write(THIS_FILE_BYTES) comp = ZstdCompressor({CParameter.checksumFlag:1}) expected = comp.compress(THIS_FILE_BYTES) + comp.flush() self.assertEqual(dst.getvalue(), expected) with BytesIO() as dst: option = {CParameter.compressionLevel:-5, CParameter.checksumFlag:1} with _check_deprecated(self): with ZstdFile(dst, "w", level_or_option=option, write_size=1024) as f: f.write(THIS_FILE_BYTES) comp = ZstdCompressor(option) expected = comp.compress(THIS_FILE_BYTES) + comp.flush() self.assertEqual(dst.getvalue(), expected) def test_write_empty_frame(self): # .FLUSH_FRAME generates an empty content frame c = ZstdCompressor() self.assertNotEqual(c.flush(c.FLUSH_FRAME), b'') self.assertNotEqual(c.flush(c.FLUSH_FRAME), b'') # don't generate empty content frame bo = BytesIO() with ZstdFile(bo, 'w') as f: pass self.assertEqual(bo.getvalue(), b'') bo = BytesIO() with ZstdFile(bo, 'w') as f: f.flush(f.FLUSH_FRAME) self.assertEqual(bo.getvalue(), b'') # if .write(b''), generate empty content frame bo = BytesIO() with ZstdFile(bo, 'w') as f: f.write(b'') self.assertNotEqual(bo.getvalue(), b'') # has an empty content frame bo = BytesIO() with ZstdFile(bo, 'w') as f: f.flush(f.FLUSH_BLOCK) self.assertNotEqual(bo.getvalue(), b'') def test_write_empty_block(self): # If no internal data, .FLUSH_BLOCK returns b''. 
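# The ZstdFile checks below build on this: flushing an empty block must not
# advance the underlying file position (see the f._fp.tell() assertions).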
c = ZstdCompressor() self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') self.assertNotEqual(c.compress(b'123', c.FLUSH_BLOCK), b'') self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') self.assertEqual(c.compress(b''), b'') self.assertEqual(c.compress(b''), b'') self.assertEqual(c.flush(c.FLUSH_BLOCK), b'') # mode = .last_mode bo = BytesIO() with ZstdFile(bo, 'w') as f: f.write(b'123') f.flush(f.FLUSH_BLOCK) fp_pos = f._fp.tell() self.assertNotEqual(fp_pos, 0) f.flush(f.FLUSH_BLOCK) self.assertEqual(f._fp.tell(), fp_pos) # mode != .last_mode bo = BytesIO() with ZstdFile(bo, 'w') as f: f.flush(f.FLUSH_BLOCK) self.assertEqual(f._fp.tell(), 0) f.write(b'') f.flush(f.FLUSH_BLOCK) self.assertEqual(f._fp.tell(), 0) def test_write_101(self): with BytesIO() as dst: with ZstdFile(dst, "w") as f: for start in range(0, len(THIS_FILE_BYTES), 101): f.write(THIS_FILE_BYTES[start:start+101]) comp = ZstdCompressor() expected = comp.compress(THIS_FILE_BYTES) + comp.flush() self.assertEqual(dst.getvalue(), expected) def test_write_append(self): def comp(data): comp = ZstdCompressor() return comp.compress(data) + comp.flush() part1 = THIS_FILE_BYTES[:1024] part2 = THIS_FILE_BYTES[1024:1536] part3 = THIS_FILE_BYTES[1536:] expected = b"".join(comp(x) for x in (part1, part2, part3)) with BytesIO() as dst: with ZstdFile(dst, "w") as f: f.write(part1) with ZstdFile(dst, "a") as f: f.write(part2) with ZstdFile(dst, "a") as f: f.write(part3) self.assertEqual(dst.getvalue(), expected) def test_write_bad_args(self): f = ZstdFile(BytesIO(), "w") f.close() self.assertRaises(ValueError, f.write, b"foo") with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB), "r") as f: self.assertRaises(ValueError, f.write, b"bar") with ZstdFile(BytesIO(), "w") as f: self.assertRaises(TypeError, f.write, None) self.assertRaises(TypeError, f.write, "text") self.assertRaises(TypeError, f.write, 789) def test_writelines(self): def comp(data): comp = ZstdCompressor() return comp.compress(data) + comp.flush() with BytesIO(THIS_FILE_BYTES) as f: lines = f.readlines() with BytesIO() as dst: with ZstdFile(dst, "w") as f: f.writelines(lines) expected = comp(THIS_FILE_BYTES) self.assertEqual(dst.getvalue(), expected) def test_seek_forward(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.seek(555) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[555:]) def test_seek_forward_across_streams(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB * 2)) as f: f.seek(len(DECOMPRESSED_100_PLUS_32KB) + 123) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[123:]) def test_seek_forward_relative_to_current(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.read(100) f.seek(1236, 1) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[1336:]) def test_seek_forward_relative_to_end(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.seek(-555, 2) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[-555:]) def test_seek_backward(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.read(1001) f.seek(211) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[211:]) def test_seek_backward_across_streams(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB * 2)) as f: f.read(len(DECOMPRESSED_100_PLUS_32KB) + 333) f.seek(737) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[737:] + DECOMPRESSED_100_PLUS_32KB) def test_seek_backward_relative_to_end(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.seek(-150, 2) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB[-150:]) def test_seek_past_end(self): 
with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.seek(len(DECOMPRESSED_100_PLUS_32KB) + 9001) self.assertEqual(f.tell(), len(DECOMPRESSED_100_PLUS_32KB)) self.assertEqual(f.read(), b"") def test_seek_past_start(self): with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: f.seek(-88) self.assertEqual(f.tell(), 0) self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) def test_seek_bad_args(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) f.close() self.assertRaises(ValueError, f.seek, 0) with ZstdFile(BytesIO(), "w") as f: self.assertRaises(ValueError, f.seek, 0) with ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) as f: self.assertRaises(ValueError, f.seek, 0, 3) # io.BufferedReader raises TypeError instead of ValueError self.assertRaises((TypeError, ValueError), f.seek, 9, ()) self.assertRaises(TypeError, f.seek, None) self.assertRaises(TypeError, f.seek, b"derp") def test_seek_not_seekable(self): class C(BytesIO): def seekable(self): return False obj = C(COMPRESSED_100_PLUS_32KB) with ZstdFile(obj, 'r') as f: d = f.read(1) self.assertFalse(f.seekable()) with self.assertRaisesRegex(io.UnsupportedOperation, 'not seekable'): f.seek(0) d += f.read() self.assertEqual(d, DECOMPRESSED_100_PLUS_32KB) def test_tell(self): with ZstdFile(BytesIO(DAT_130K_C)) as f: pos = 0 while True: self.assertEqual(f.tell(), pos) result = f.read(random.randint(171, 189)) if not result: break pos += len(result) self.assertEqual(f.tell(), len(DAT_130K_D)) with ZstdFile(BytesIO(), "w") as f: for pos in range(0, len(DAT_130K_D), 143): self.assertEqual(f.tell(), pos) f.write(DAT_130K_D[pos:pos+143]) self.assertEqual(f.tell(), len(DAT_130K_D)) def test_tell_bad_args(self): f = ZstdFile(BytesIO(COMPRESSED_100_PLUS_32KB)) f.close() self.assertRaises(ValueError, f.tell) def test_file_dict(self): # default bi = BytesIO() with ZstdFile(bi, 'w', zstd_dict=TRAINED_DICT) as f: f.write(SAMPLES[0]) bi.seek(0) with ZstdFile(bi, zstd_dict=TRAINED_DICT) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) # .as_(un)digested_dict bi = BytesIO() with ZstdFile(bi, 'w', zstd_dict=TRAINED_DICT.as_digested_dict) as f: f.write(SAMPLES[0]) bi.seek(0) with ZstdFile(bi, zstd_dict=TRAINED_DICT.as_undigested_dict) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) def test_file_prefix(self): bi = BytesIO() with ZstdFile(bi, 'w', zstd_dict=TRAINED_DICT.as_prefix) as f: f.write(SAMPLES[0]) bi.seek(0) with ZstdFile(bi, zstd_dict=TRAINED_DICT.as_prefix) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) def test_UnsupportedOperation(self): # 1 with ZstdFile(BytesIO(), 'r') as f: with self.assertRaises(io.UnsupportedOperation): f.write(b'1234') # 2 class T: def read(self, size): return b'a' * size with self.assertRaises(TypeError): ZstdFile(T(), 'w') # 3 with ZstdFile(BytesIO(), 'w') as f: with self.assertRaises(io.UnsupportedOperation): f.read(100) with self.assertRaises(io.UnsupportedOperation): f.seek(100) self.assertEqual(f.closed, True) with self.assertRaises(ValueError): f.readable() with self.assertRaises(ValueError): f.tell() with self.assertRaises(ValueError): f.read(100) def test_read_readinto_readinto1(self): lst = [] with ZstdFile(BytesIO(COMPRESSED_THIS_FILE*5)) as f: while True: method = random.randint(0, 2) size = random.randint(0, 300) if method == 0: dat = f.read(size) if not dat and size: break lst.append(dat) elif method == 1: ba = bytearray(size) read_size = f.readinto(ba) if read_size == 0 and size: break lst.append(bytes(ba[:read_size])) elif method == 2: ba = bytearray(size) read_size = f.readinto1(ba) if 
read_size == 0 and size: break lst.append(bytes(ba[:read_size])) self.assertEqual(b''.join(lst), THIS_FILE_BYTES*5) def test_zstdfile_flush(self): # closed f = ZstdFile(BytesIO(), 'w') f.close() with self.assertRaises(ValueError): f.flush() # read with ZstdFile(BytesIO(), 'r') as f: # does nothing for read-only stream f.flush() # write DAT = b'abcd' bi = BytesIO() with ZstdFile(bi, 'w') as f: self.assertEqual(f.write(DAT), len(DAT)) self.assertEqual(f.tell(), len(DAT)) self.assertEqual(bi.tell(), 0) # not enough for a block self.assertEqual(f.flush(), None) self.assertEqual(f.tell(), len(DAT)) self.assertGreater(bi.tell(), 0) # flushed # write, no .flush() method class C: def write(self, b): return len(b) with ZstdFile(C(), 'w') as f: self.assertEqual(f.write(DAT), len(DAT)) self.assertEqual(f.tell(), len(DAT)) self.assertEqual(f.flush(), None) self.assertEqual(f.tell(), len(DAT)) def test_zstdfile_flush_mode(self): self.assertEqual(ZstdFile.FLUSH_BLOCK, ZstdCompressor.FLUSH_BLOCK) self.assertEqual(ZstdFile.FLUSH_FRAME, ZstdCompressor.FLUSH_FRAME) with self.assertRaises(AttributeError): ZstdFile.CONTINUE bo = BytesIO() with ZstdFile(bo, 'w') as f: # flush block f.write(b'123') self.assertIsNone(f.flush(f.FLUSH_BLOCK)) p1 = bo.tell() # mode == .last_mode, should return self.assertIsNone(f.flush()) p2 = bo.tell() self.assertEqual(p1, p2) # flush frame f.write(b'456') self.assertIsNone(f.flush(mode=f.FLUSH_FRAME)) # flush frame f.write(b'789') self.assertIsNone(f.flush(f.FLUSH_FRAME)) p1 = bo.tell() # mode == .last_mode, should return self.assertIsNone(f.flush(f.FLUSH_FRAME)) p2 = bo.tell() self.assertEqual(p1, p2) self.assertEqual(decompress(bo.getvalue()), b'123456789') bo = BytesIO() with ZstdFile(bo, 'w') as f: f.write(b'123') with self.assertRaisesRegex(ValueError, r'\.FLUSH_.*?\.FLUSH_'): f.flush(ZstdCompressor.CONTINUE) with self.assertRaises(ValueError): f.flush(-1) with self.assertRaises(ValueError): f.flush(123456) with self.assertRaises(TypeError): f.flush(node=ZstdCompressor.CONTINUE) with self.assertRaises((TypeError, ValueError)): f.flush('FLUSH_FRAME') with self.assertRaises(TypeError): f.flush(b'456', f.FLUSH_BLOCK) def test_zstdfile_truncate(self): with ZstdFile(BytesIO(), 'w') as f: with self.assertRaises(io.UnsupportedOperation): f.truncate(200) def test_zstdfile_iter_issue45475(self): lines = [l for l in ZstdFile(BytesIO(COMPRESSED_THIS_FILE))] self.assertGreater(len(lines), 0) def test_append_new_file(self): with tempfile.NamedTemporaryFile(delete=True) as tmp_f: filename = tmp_f.name with ZstdFile(filename, 'a') as f: pass self.assertTrue(os.path.isfile(filename)) os.remove(filename) class OpenTestCase(unittest.TestCase): def test_binary_modes(self): with open(BytesIO(COMPRESSED_100_PLUS_32KB), "rb") as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) with BytesIO() as bio: with open(bio, "wb") as f: f.write(DECOMPRESSED_100_PLUS_32KB) file_data = decompress(bio.getvalue()) self.assertEqual(file_data, DECOMPRESSED_100_PLUS_32KB) with open(bio, "ab") as f: f.write(DECOMPRESSED_100_PLUS_32KB) file_data = decompress(bio.getvalue()) self.assertEqual(file_data, DECOMPRESSED_100_PLUS_32KB * 2) def test_text_modes(self): # empty input with open(BytesIO(b''), "rt", encoding="utf-8", newline='\n') as reader: with self.assertRaises(EOFError): for _ in reader: pass # read uncompressed = THIS_FILE_STR.replace(os.linesep, "\n") with open(BytesIO(COMPRESSED_THIS_FILE), "rt", encoding="utf-8") as f: self.assertEqual(f.read(), uncompressed) with BytesIO() as bio: # write with 
open(bio, "wt", encoding="utf-8") as f: f.write(uncompressed) file_data = decompress(bio.getvalue()).decode("utf-8") self.assertEqual(file_data.replace(os.linesep, "\n"), uncompressed) # append with open(bio, "at", encoding="utf-8") as f: f.write(uncompressed) file_data = decompress(bio.getvalue()).decode("utf-8") self.assertEqual(file_data.replace(os.linesep, "\n"), uncompressed * 2) def test_bad_params(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): TESTFN = pathlib.Path(tmp_f.name) else: TESTFN = tmp_f.name with self.assertRaises(ValueError): open(TESTFN, "") with self.assertRaises(ValueError): open(TESTFN, "rbt") with self.assertRaises(ValueError): open(TESTFN, "rb", encoding="utf-8") with self.assertRaises(ValueError): open(TESTFN, "rb", errors="ignore") with self.assertRaises(ValueError): open(TESTFN, "rb", newline="\n") os.remove(TESTFN) def test_option(self): option = {DParameter.windowLogMax:25} with open(BytesIO(COMPRESSED_100_PLUS_32KB), "rb", level_or_option=option) as f: self.assertEqual(f.read(), DECOMPRESSED_100_PLUS_32KB) option = {CParameter.compressionLevel:12} with BytesIO() as bio: with open(bio, "wb", level_or_option=option) as f: f.write(DECOMPRESSED_100_PLUS_32KB) file_data = decompress(bio.getvalue()) self.assertEqual(file_data, DECOMPRESSED_100_PLUS_32KB) def test_encoding(self): uncompressed = THIS_FILE_STR.replace(os.linesep, "\n") with BytesIO() as bio: with open(bio, "wt", encoding="utf-16-le") as f: f.write(uncompressed) file_data = decompress(bio.getvalue()).decode("utf-16-le") self.assertEqual(file_data.replace(os.linesep, "\n"), uncompressed) bio.seek(0) with open(bio, "rt", encoding="utf-16-le") as f: self.assertEqual(f.read().replace(os.linesep, "\n"), uncompressed) def test_encoding_error_handler(self): with BytesIO(compress(b"foo\xffbar")) as bio: with open(bio, "rt", encoding="ascii", errors="ignore") as f: self.assertEqual(f.read(), "foobar") def test_newline(self): # Test with explicit newline (universal newline mode disabled). 
text = THIS_FILE_STR.replace(os.linesep, "\n") with BytesIO() as bio: with open(bio, "wt", encoding="utf-8", newline="\n") as f: f.write(text) bio.seek(0) with open(bio, "rt", encoding="utf-8", newline="\r") as f: self.assertEqual(f.readlines(), [text]) def test_x_mode(self): with tempfile.NamedTemporaryFile(delete=False) as tmp_f: if sys.version_info >= (3, 6): TESTFN = pathlib.Path(tmp_f.name) else: TESTFN = tmp_f.name for mode in ("x", "xb", "xt"): os.remove(TESTFN) if mode == "xt": encoding = "utf-8" else: encoding = None with open(TESTFN, mode, encoding=encoding): pass with self.assertRaises(FileExistsError): with open(TESTFN, mode): pass os.remove(TESTFN) def test_open_dict(self): # default bi = BytesIO() with open(bi, 'w', zstd_dict=TRAINED_DICT) as f: f.write(SAMPLES[0]) bi.seek(0) with open(bi, zstd_dict=TRAINED_DICT) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) # .as_(un)digested_dict bi = BytesIO() with open(bi, 'w', zstd_dict=TRAINED_DICT.as_digested_dict) as f: f.write(SAMPLES[0]) bi.seek(0) with open(bi, zstd_dict=TRAINED_DICT.as_undigested_dict) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) # invalid dictionary bi = BytesIO() with self.assertRaisesRegex(TypeError, 'zstd_dict'): open(bi, 'w', zstd_dict={1:2, 2:3}) with self.assertRaisesRegex(TypeError, 'zstd_dict'): open(bi, 'w', zstd_dict=b'1234567890') def test_open_prefix(self): bi = BytesIO() with open(bi, 'w', zstd_dict=TRAINED_DICT.as_prefix) as f: f.write(SAMPLES[0]) bi.seek(0) with open(bi, zstd_dict=TRAINED_DICT.as_prefix) as f: dat = f.read() self.assertEqual(dat, SAMPLES[0]) def test_buffer_protocol(self): # don't use len() for buffer protocol objects arr = array.array("i", range(1000)) LENGTH = len(arr) * arr.itemsize with open(BytesIO(), "wb") as f: self.assertEqual(f.write(arr), LENGTH) self.assertEqual(f.tell(), LENGTH) class StreamFunctionsTestCase(unittest.TestCase): def test_compress_stream(self): bi = BytesIO(THIS_FILE_BYTES) bo = BytesIO() with _check_deprecated(self): ret = compress_stream(bi, bo, level_or_option=1, zstd_dict=TRAINED_DICT, pledged_input_size=2**64-1, # backward compatible read_size=200*KB, write_size=200*KB) output = bo.getvalue() self.assertEqual(ret, (len(THIS_FILE_BYTES), len(output))) self.assertEqual(decompress(output, TRAINED_DICT), THIS_FILE_BYTES) bi.close() bo.close() # empty input bi = BytesIO() bo = BytesIO() with _check_deprecated(self): ret = compress_stream(bi, bo, pledged_input_size=None) self.assertEqual(ret, (0, 0)) self.assertEqual(bo.getvalue(), b'') bi.close() bo.close() # wrong pledged_input_size size bi = BytesIO(THIS_FILE_BYTES) bo = BytesIO() with self.assertRaises(ZstdError): with _check_deprecated(self): compress_stream(bi, bo, pledged_input_size=len(THIS_FILE_BYTES)-1) bi.close() bo.close() bi = BytesIO(THIS_FILE_BYTES) bo = BytesIO() with self.assertRaises(ZstdError): with _check_deprecated(self): compress_stream(bi, bo, pledged_input_size=len(THIS_FILE_BYTES)+1) bi.close() bo.close() # wrong arguments b1 = BytesIO() b2 = BytesIO() with self.assertRaisesRegex(TypeError, r'input_stream'): with _check_deprecated(self): compress_stream(123, b1) with self.assertRaisesRegex(TypeError, r'output_stream'): with _check_deprecated(self): compress_stream(b1, 123) with self.assertRaisesRegex(TypeError, r'options'): with _check_deprecated(self): compress_stream(b1, b2, level_or_option='3') with self.assertRaisesRegex(TypeError, r'zstd_dict'): with _check_deprecated(self): compress_stream(b1, b2, zstd_dict={}) with self.assertRaisesRegex(TypeError, 
r'zstd_dict'): with _check_deprecated(self): compress_stream(b1, b2, zstd_dict=b'1234567890') with self.assertRaisesRegex(ValueError, r'size argument'): with _check_deprecated(self): compress_stream(b1, b2, pledged_input_size=-1) with self.assertRaisesRegex(ValueError, r'size argument'): with _check_deprecated(self): compress_stream(b1, b2, pledged_input_size=2**64+1) with self.assertRaisesRegex(ValueError, r'read_size'): with _check_deprecated(self): compress_stream(b1, b2, read_size=-1) with _check_deprecated(self): compress_stream(b1, b2, write_size=2**64+1) with self.assertRaisesRegex(TypeError, r'callback'): with _check_deprecated(self): compress_stream(b1, None, callback=None) b1.close() b2.close() def test_compress_stream_callback(self): in_lst = [] out_lst = [] def func(total_input, total_output, read_data, write_data): in_lst.append(read_data.tobytes()) out_lst.append(write_data.tobytes()) bi = BytesIO(THIS_FILE_BYTES) bo = BytesIO() option = {CParameter.compressionLevel : 1, CParameter.checksumFlag : 1} with _check_deprecated(self): ret = compress_stream(bi, bo, level_or_option=option, read_size=701, write_size=101, callback=func) bi.close() bo.close() in_dat = b''.join(in_lst) out_dat = b''.join(out_lst) self.assertEqual(ret, (len(in_dat), len(out_dat))) self.assertEqual(in_dat, THIS_FILE_BYTES) self.assertEqual(decompress(out_dat), THIS_FILE_BYTES) @unittest.skipIf(not zstd_support_multithread, "zstd build doesn't support multi-threaded compression") def test_compress_stream_multi_thread(self): size = 40*1024*1024 b = THIS_FILE_BYTES * (size // len(THIS_FILE_BYTES)) option = {CParameter.compressionLevel : 1, CParameter.checksumFlag : 1, CParameter.nbWorkers : 2} bi = BytesIO(b) bo = BytesIO() with _check_deprecated(self): ret = compress_stream(bi, bo, level_or_option=option, pledged_input_size=len(b)) output = bo.getvalue() self.assertEqual(ret, (len(b), len(output))) self.assertEqual(decompress(output), b) bi.close() bo.close() def test_decompress_stream(self): bi = BytesIO(COMPRESSED_THIS_FILE) bo = BytesIO() with _check_deprecated(self): ret = decompress_stream(bi, bo, option={DParameter.windowLogMax:26}, read_size=200*KB, write_size=200*KB) self.assertEqual(ret, (len(COMPRESSED_THIS_FILE), len(THIS_FILE_BYTES))) self.assertEqual(bo.getvalue(), THIS_FILE_BYTES) bi.close() bo.close() # empty input bi = BytesIO() bo = BytesIO() with _check_deprecated(self): ret = decompress_stream(bi, bo) self.assertEqual(ret, (0, 0)) self.assertEqual(bo.getvalue(), b'') bi.close() bo.close() # wrong arguments b1 = BytesIO() b2 = BytesIO() with self.assertRaisesRegex(TypeError, r'input_stream'): with _check_deprecated(self): decompress_stream(123, b1) with self.assertRaisesRegex(TypeError, r'output_stream'): with _check_deprecated(self): decompress_stream(b1, 123) with self.assertRaisesRegex(TypeError, r'zstd_dict'): with _check_deprecated(self): decompress_stream(b1, b2, zstd_dict={}) with self.assertRaisesRegex(TypeError, r'zstd_dict'): with _check_deprecated(self): decompress_stream(b1, b2, zstd_dict=b'1234567890') with self.assertRaisesRegex(TypeError, r'option'): with _check_deprecated(self): decompress_stream(b1, b2, option=3) with self.assertRaisesRegex(ValueError, r'read_size'): with _check_deprecated(self): decompress_stream(b1, b2, read_size=-1) with _check_deprecated(self): decompress_stream(b1, b2, write_size=2**64+1) with self.assertRaisesRegex(TypeError, r'callback'): with _check_deprecated(self): decompress_stream(b1, None, callback=None) b1.close() b2.close() def 
test_decompress_stream_callback(self): in_lst = [] out_lst = [] def func(total_input, total_output, read_data, write_data): in_lst.append(read_data.tobytes()) out_lst.append(write_data.tobytes()) bi = BytesIO(COMPRESSED_THIS_FILE) bo = BytesIO() option = {DParameter.windowLogMax : 26} with _check_deprecated(self): ret = decompress_stream(bi, bo, option=option, read_size=701, write_size=401, callback=func) bi.close() bo.close() in_dat = b''.join(in_lst) out_dat = b''.join(out_lst) self.assertEqual(ret, (len(in_dat), len(out_dat))) self.assertEqual(in_dat, COMPRESSED_THIS_FILE) self.assertEqual(out_dat, THIS_FILE_BYTES) def test_decompress_stream_multi_frames(self): dat = (COMPRESSED_100_PLUS_32KB + SKIPPABLE_FRAME) * 2 bi = BytesIO(dat) bo = BytesIO() with _check_deprecated(self): ret = decompress_stream(bi, bo, read_size=200*KB, write_size=50*KB) output = bo.getvalue() self.assertEqual(ret, (len(dat), len(output))) self.assertEqual(output, DECOMPRESSED_100_PLUS_32KB + DECOMPRESSED_100_PLUS_32KB) bi.close() bo.close() # incomplete frame bi = BytesIO(dat[:-1]) bo = BytesIO() with self.assertRaisesRegex(ZstdError, 'incomplete'): with _check_deprecated(self): decompress_stream(bi, bo) bi.close() bo.close() def test_stream_return_wrong_value(self): # wrong type class M: def readinto(self, b): return 'a' def write(self, b): return 'a' with self.assertRaises(TypeError): with _check_deprecated(self): compress_stream(M(), BytesIO()) with self.assertRaises(TypeError): with _check_deprecated(self): decompress_stream(M(), BytesIO()) # wrong value class N: def __init__(self, ret_value): self.ret_value = ret_value def readinto(self, b): return self.ret_value def write(self, b): return self.ret_value # < 0 with self.assertRaises(TypeError): with _check_deprecated(self): compress_stream(N(-1), BytesIO()) with self.assertRaises(TypeError): with _check_deprecated(self): decompress_stream(N(-2), BytesIO()) # should > upper bound (~128 KiB) with self.assertRaises(TypeError): with _check_deprecated(self): compress_stream(N(10000000), BytesIO()) with self.assertRaises(TypeError): with _check_deprecated(self): decompress_stream(N(10000000), BytesIO()) def test_empty_input_no_callback(self): def cb(a,b,c,d): self.fail('callback function should not be called') # callback function will not be called for empty input, # it's a promised behavior. 
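# Both calls below must therefore complete without invoking cb, while still
# emitting the deprecation warning that _check_deprecated expects.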
with _check_deprecated(self): compress_stream(io.BytesIO(b''), io.BytesIO(), callback=cb) with _check_deprecated(self): decompress_stream(io.BytesIO(b''), io.BytesIO(), callback=cb) def test_stream_dict(self): zd = ZstdDict(THIS_FILE_BYTES, is_raw=True) # default with BytesIO(THIS_FILE_BYTES) as bi, BytesIO() as bo: with _check_deprecated(self): ret = compress_stream(bi, bo, zstd_dict=zd) compressed = bo.getvalue() self.assertEqual(ret, (len(THIS_FILE_BYTES), len(compressed))) with BytesIO(compressed) as bi, BytesIO() as bo: with _check_deprecated(self): ret = decompress_stream(bi, bo, zstd_dict=zd) decompressed = bo.getvalue() self.assertEqual(ret, (len(compressed), len(decompressed))) self.assertEqual(decompressed, THIS_FILE_BYTES) # .as_(un)digested_dict with BytesIO(THIS_FILE_BYTES) as bi, BytesIO() as bo: with _check_deprecated(self): ret = compress_stream(bi, bo, zstd_dict=zd.as_undigested_dict) compressed = bo.getvalue() self.assertEqual(ret, (len(THIS_FILE_BYTES), len(compressed))) with BytesIO(compressed) as bi, BytesIO() as bo: with _check_deprecated(self): ret = decompress_stream(bi, bo, zstd_dict=zd.as_digested_dict) decompressed = bo.getvalue() self.assertEqual(ret, (len(compressed), len(decompressed))) self.assertEqual(decompressed, THIS_FILE_BYTES) def test_stream_prefix(self): zd = ZstdDict(THIS_FILE_BYTES, is_raw=True) with BytesIO(THIS_FILE_BYTES) as bi, BytesIO() as bo: with _check_deprecated(self): ret = compress_stream(bi, bo, zstd_dict=zd.as_prefix) compressed = bo.getvalue() self.assertEqual(ret, (len(THIS_FILE_BYTES), len(compressed))) with BytesIO(compressed) as bi, BytesIO() as bo: with _check_deprecated(self): ret = decompress_stream(bi, bo, zstd_dict=zd.as_prefix) decompressed = bo.getvalue() self.assertEqual(ret, (len(compressed), len(decompressed))) self.assertEqual(decompressed, THIS_FILE_BYTES) class CLITestCase(unittest.TestCase): @classmethod def setUpClass(cls): cls.tempdir = tempfile.TemporaryDirectory() cls.dir_name = cls.tempdir.name cls.samples_path = os.path.join(cls.dir_name, 'samples').rstrip(os.sep) os.mkdir(cls.samples_path) for i, sample in enumerate(SAMPLES): file_path = os.path.join(cls.samples_path, str(i) + '.dat') with open(file_path, 'wb') as f: f.write(sample) @classmethod def tearDownClass(cls): cls.tempdir.cleanup() assert not os.path.isdir(cls.dir_name) def test_help(self): cmd = [sys.executable, '-m', 'pyzstd', '-h'] result = subprocess.run(cmd, stdout=subprocess.PIPE) self.assertIn(b'CLI of pyzstd module', result.stdout) def test_sequence(self): # train dict DICT_PATH = os.path.join(self.dir_name, 'dict') DICT_SIZE = 3*1024 cmd = [sys.executable, '-m', 'pyzstd', '--train', self.samples_path + os.sep + '*.dat', '-o', DICT_PATH, '--dictID', '1234567', '--maxdict', str(DICT_SIZE)] result = subprocess.run(cmd, stdout=subprocess.PIPE) self.assertRegex(result.stdout, rb'(?s)Training succeeded.*?dict_id=1234567') self.assertLessEqual(os.path.getsize(DICT_PATH), DICT_SIZE) # compress cmd = [sys.executable, '-m', 'pyzstd', '--compress', os.path.join(self.samples_path, '1.dat'), '--level', '1', '-D', DICT_PATH] result = subprocess.run(cmd, stdout=subprocess.PIPE) self.assertRegex(result.stdout, rb'output file:.*?1\.dat\.zst[\s\S]*?Compression succeeded') # decompress cmd = [sys.executable, '-m', 'pyzstd', '--decompress', os.path.join(self.samples_path, '1.dat.zst'), '-f', '-D', DICT_PATH] result = subprocess.run(cmd, stdout=subprocess.PIPE) self.assertRegex(result.stdout, rb'output file:.*?1\.dat[\s\S]*?Decompression succeeded') # test cmd = 
        cmd = [sys.executable, '-m', 'pyzstd', '--test',
               os.path.join(self.samples_path, '1.dat.zst'),
               '-D', DICT_PATH]
        result = subprocess.run(cmd, stdout=subprocess.PIPE)
        self.assertRegex(result.stdout,
                         rb'output file: None[\s\S]*?Decompression succeeded')

        # create tar archive
        cmd = [sys.executable, '-m', 'pyzstd',
               '--tar-input-dir', self.samples_path, '--level', '1']
        result = subprocess.run(cmd, stdout=subprocess.PIPE)
        self.assertRegex(result.stdout,
                         rb'output file:.*?samples\.tar\.zst[\s\S]*?Archiving succeeded')

        # extract tar archive
        OUTPUT_DIR = os.path.join(self.dir_name, 'tar_output')
        os.mkdir(OUTPUT_DIR)
        cmd = [sys.executable, '-m', 'pyzstd', '--decompress',
               os.path.join(self.dir_name, 'samples.tar.zst'),
               '--tar-output-dir', OUTPUT_DIR]
        result = subprocess.run(cmd, stdout=subprocess.PIPE)
        self.assertIn(b'Extraction succeeded', result.stdout)

    def test_level_range(self):
        OUTPUT_FILE = os.path.join(self.dir_name, 'level_range')

        # default
        cmd = [sys.executable, '-m', 'pyzstd', '--compress',
               os.path.join(self.samples_path, '1.dat'),
               '--output', OUTPUT_FILE, '-f']
        result = subprocess.run(cmd, stdout=subprocess.PIPE)
        self.assertIn(b' - compression level: 3', result.stdout)

        # out of range
        cmd = [sys.executable, '-m', 'pyzstd', '--compress',
               os.path.join(self.samples_path, '1.dat'),
               '--level', str(compressionLevel_values.min - 1),
               '--output', OUTPUT_FILE, '-f']
        result = subprocess.run(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        self.assertIn(b'--level value should:', result.stderr)

    def test_long_range(self):
        OUTPUT_FILE = os.path.join(self.dir_name, 'long_range')

        # default
        cmd = [sys.executable, '-m', 'pyzstd', '--compress',
               os.path.join(self.samples_path, '1.dat'),
               '--long', '--output', OUTPUT_FILE, '-f']
        result = subprocess.run(cmd, stdout=subprocess.PIPE)
        self.assertIn(b' - long mode: yes, windowLog is 27', result.stdout)

        # out of range
        cmd = [sys.executable, '-m', 'pyzstd', '--compress',
               os.path.join(self.samples_path, '1.dat'),
               '--long', str(CParameter.windowLog.bounds()[1] + 1),
               '--output', OUTPUT_FILE, '-f']
        result = subprocess.run(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        self.assertRegex(result.stderr, rb'--long value should:')

    def test_dictID_range(self):
        OUTPUT_FILE = os.path.join(self.dir_name, 'dictid_range')
        cmd = [sys.executable, '-m', 'pyzstd', '--train',
               self.samples_path + os.sep + '*.dat',
               '-o', OUTPUT_FILE, '--dictID', str(0xFFFFFFFF + 1)]
        result = subprocess.run(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        self.assertIn(b'--dictID value should:', result.stderr)


if __name__ == "__main__":
    unittest.main()

pyzstd-0.19.1/.gitignore0000644000000000000000000000012413615410400012044 0ustar00/env
__pycache__
/build
/dist
*.egg-info
/.eggs
/.vscode
/src/pyzstd/_version.py

pyzstd-0.19.1/LICENSE0000644000000000000000000000300413615410400011061 0ustar00BSD 3-Clause License

Copyright (c) 2020-present, Ma Lin and contributors.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

pyzstd-0.19.1/README.md0000644000000000000000000000262313615410400011341 0ustar00
# pyzstd

Python bindings to the Zstandard (zstd) compression library

[![GitHub build status](https://img.shields.io/github/actions/workflow/status/rogdham/pyzstd/build.yml?branch=master)](https://github.com/rogdham/pyzstd/actions?query=branch:master)
[![Release on PyPI](https://img.shields.io/pypi/v/pyzstd)](https://pypi.org/project/pyzstd/)
[![BSD-3-Clause License](https://img.shields.io/pypi/l/pyzstd)](https://github.com/Rogdham/pyzstd/blob/master/LICENSE.txt)

---

[📖 Documentation][doc]   |   [📃 Changelog](./CHANGELOG.md)

---

The `pyzstd` module provides Python support for [Zstandard](http://www.zstd.net),
using an API style similar to the `bz2`, `lzma`, and `zlib` modules.
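For a quick feel of that API style, here is a minimal sketch (the `example.zst`
path is only a placeholder):

```python
import pyzstd

data = b"hello zstd " * 100

# One-shot functions, as in bz2/lzma/zlib
compressed = pyzstd.compress(data)
assert pyzstd.decompress(compressed) == data

# File-like interface, analogous to bz2.BZ2File and lzma.LZMAFile
with pyzstd.ZstdFile("example.zst", "w") as f:
    f.write(data)
with pyzstd.ZstdFile("example.zst", "r") as f:
    assert f.read() == data
```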
> [!WARNING]
>
> Zstandard is now natively supported in Python’s standard library via the
> [`compression.zstd` module][compression.zstd]. For older Python versions, use the
> [`backports.zstd` library][backports.zstd] as a fallback.
>
> We recommend that new projects use the standard library, and that existing
> projects consider migrating.
>
> `pyzstd` internally uses `compression.zstd` since version 0.19.0.
>
> See [`pyzstd`'s documentation][doc] for details and a migration guide.

[doc]: https://pyzstd.readthedocs.io/
[compression.zstd]: https://docs.python.org/3.14/library/compression.zstd.html
[backports.zstd]: https://github.com/Rogdham/backports.zstd
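For projects migrating as suggested above, a version-gated import keeps a single
code path; this is a minimal sketch, assuming `backports.zstd` exposes the same
names as `compression.zstd`:

```python
import sys

# Use the stdlib module on Python 3.14+, the backport otherwise
if sys.version_info >= (3, 14):
    from compression.zstd import compress, decompress
else:
    from backports.zstd import compress, decompress

# The rest of the code is identical on all supported versions
payload = compress(b"migrate me")
assert decompress(payload) == b"migrate me"
```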
"PLR0913", "PLR0915", "PLR2004", "PTH", "TRY003", "TRY301", ] [tool.ruff.lint.per-file-ignores] "src/pyzstd/__main__.py" = ["PLC0415", "T201"] "docs/conf.py" = ["A001", "INP001"] [tool.ruff.lint.isort] force-sort-within-sections = true known-first-party = ["pyzstd"] pyzstd-0.19.1/PKG-INFO0000644000000000000000000000512713615410400011161 0ustar00Metadata-Version: 2.4 Name: pyzstd Version: 0.19.1 Summary: Support for Zstandard (zstd) compression Project-URL: Homepage, https://github.com/Rogdham/pyzstd Project-URL: Documentation, https://pyzstd.readthedocs.io/ Project-URL: Source, https://github.com/Rogdham/pyzstd Author-email: Ma Lin , Rogdham Maintainer-email: Rogdham License-Expression: BSD-3-Clause License-File: LICENSE Keywords: compress,decompress,file,seek,seekable,tar,zst,zstandard,zstd Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: 3.14 Classifier: Topic :: System :: Archiving :: Compression Requires-Python: >=3.10 Requires-Dist: backports-zstd>=1.0.0; python_version < '3.14' Requires-Dist: typing-extensions>=4.13.2; python_version < '3.13' Description-Content-Type: text/markdown