borgstore-0.4.0/CHANGES.rst

Changelog
=========

Version 0.4.0 (2026-03-15)
--------------------------

New features:

- REST (http/https) backend, REST server, #18

Fixes:

- fix permissions check, #139
- posixfs/sftp: do not raise if base_path can not be deleted, #133
- list: do not yield invalid names, #130
- posixfs, s3, sftp: URL-unquote, #129

Other changes:

- add "rclone" and "rest" extras, "requests" is now an optional requirement

Version 0.3.1 (2026-02-09)
--------------------------

Bug fixes:

- s3 URL: ensure s3 endpoint is optional

Other changes:

- add support for Python 3.14, remove 3.9
- backends: have separate exceptions for invalid URL and dependency missing
- posixfs: better exception message if not absolute path
- use SPDX license identifier, require a recent setuptools
- CI:

  - add sftp store testing, #64
  - add s3 store testing

- docs:

  - describe the posixfs permissions system
  - updates, typos and grammar fixes
  - mention the permissions system of posixfs backend

Version 0.3.0 (2025-05-22)
--------------------------

New features:

- posixfs: add a permissions system, #105
- Store: add permissions argument (only supported by posixfs)
- Store: add logging for Store ops, #104. It logs:

  - operation
  - name(s)
  - parameters such as deleted
  - size and timing

Please note:

- logging is done at DEBUG level, so log output is not visible with a default logger.
- borgstore does not configure logging; that is the task of the application that uses borgstore.
Version 0.2.0 (2025-04-21)
--------------------------

Breaking changes:

- Store.list: changed deleted argument semantics, #83:

  - True: list ONLY soft-deleted items
  - False: list ONLY non-deleted items

New features:

- new s3/b2 backend that uses the boto3 library, #96
- posixfs/sftp: create missing parent directories of the base path
- rclone: add a way to specify the path to the rclone binary for custom installations

Bug fixes:

- rclone: fix discard thread issues, #92
- rclone: check rclone regex before raising rclone-related exceptions

Other changes:

- posixfs: also support Windows file:/// URLs, #82
- posixfs / sftp: optimize mkdir usage, add retries, #85
- posixfs / sftp: change .precreate_dirs default to False
- rclone init: use a random port instead of relying on rclone to pick one

Version 0.1.0 (2024-10-15)
--------------------------

Breaking changes:

- accepted store URLs: see README
- Store: require complete levels configuration, #46

Other changes:

- sftp/posixfs backends: remove ad hoc mkdir calls, #46
- optimize Sftp._mkdir, #80
- sftp backend is now optional, avoiding dependency issues on some platforms, #74.
  Use ``pip install "borgstore[sftp]"`` to install with the sftp backend.

Version 0.0.5 (2024-10-01)
--------------------------

Fixes:

- backend.create: only reject non-empty storage, #57
- backends.sftp: fix _mkdir edge case
- backends.sftp: raise BackendDoesNotExist if base path is not found
- rclone backend:

  - don't error on create if source directory is empty, #57
  - fix hang on termination, #54

New features:

- rclone backend: retry errors on load and store 3 times

Other changes:

- remove MStore for now, see commit 6a6fb334.
- refactor Store tests, add Store.set_levels method
- move types-requests to tox.ini, only needed for development

Version 0.0.4 (2024-09-22)
--------------------------

- rclone: new backend to access any of the 100s of cloud backends that rclone
  supports; needs rclone >= v1.57.0. See the rclone docs for installing rclone
  and creating remotes.
  After that, borgstore will support URLs like:

  - rclone://remote:
  - rclone://remote:path
  - rclone:///tmp/testdir (local fs, for testing)

- Store.list: give up trying to do anything with a directory's "size"
- .info / .list: return st.st_size for a directory "as-is"
- tests: BORGSTORE_TEST_RCLONE_URL to set rclone test URL
- tests: allow BORGSTORE_TEST_*_URL in the testenv to make tox work for
  testing sftp, rclone, or other URLs.

Version 0.0.3 (2024-09-17)
--------------------------

- sftp: add support for ~/.ssh/config, #37
- sftp: username is optional, #27
- load known_hosts, remove AutoAddPolicy, #39
- store: raise backend-specific exceptions, #34
- add Store.stats property, #25
- bandwidth emulation via BORGSTORE_BANDWIDTH [bit/s], #24
- latency emulation via BORGSTORE_LATENCY [us], #24
- fix demo code, also output stats
- tests: BORGSTORE_TEST_SFTP_URL to set sftp test URL

Version 0.0.2 (2024-09-10)
--------------------------

- sftp backend: use paramiko's client.posix_rename, #17
- posixfs backend: hack: accept file://relative/path, #23
- support and test on Python 3.13, #21

Version 0.0.1 (2024-08-23)
--------------------------

First PyPI release.

borgstore-0.4.0/LICENSE.rst

Copyright (C) 2026 Thomas Waldmann

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. The name of the author may not be used to endorse or promote products
   derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

borgstore-0.4.0/PKG-INFO

Metadata-Version: 2.4
Name: borgstore
Version: 0.4.0
Summary: key/value store
Author-email: Thomas Waldmann <tw@waldmann-edv.de>
License-Expression: BSD-3-Clause
Project-URL: Homepage, https://github.com/borgbackup/borgstore
Keywords: kv,key/value,store
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Operating System :: POSIX
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Description-Content-Type: text/x-rst
License-File: LICENSE.rst
Provides-Extra: rest
Requires-Dist: requests>=2.25.1; extra == "rest"
Provides-Extra: rclone
Requires-Dist: requests>=2.25.1; extra == "rclone"
Provides-Extra: sftp
Requires-Dist: paramiko>=1.9.1; extra == "sftp"
Provides-Extra: s3
Requires-Dist: boto3; extra == "s3"
Provides-Extra: none
Dynamic: license-file

BorgStore
=========

A key/value store implementation in Python, supporting multiple backends.

Keys
----

A key (str) can look like:

- 0123456789abcdef... (usually a long, hex-encoded hash value)
- any other pure ASCII string without '/', '..', or spaces.

Namespaces
----------

To keep things separate, keys should be prefixed with a namespace, such as:

- config/settings
- meta/0123456789abcdef...
- data/0123456789abcdef...

Please note:

1. You should always use namespaces.
2. Nested namespaces like namespace1/namespace2/key are not supported.
3. The code can work without a namespace (empty namespace ""), but then you
   can't add another namespace later, because that would create nested
   namespaces.

Values
------

Values can be any arbitrary binary data (bytes).

Store Operations
----------------

The high-level Store API implementation transparently deals with nesting and
soft deletion, so the caller doesn't need to care much about that, and the
backend API can be much simpler:

- create/destroy: initialize or remove the whole store.
- list: flat list of the items in the given namespace (by default, only
  non-deleted items; optionally, only soft-deleted items).
- store: write a new item into the store (providing its key/value pair).
- load: read a value from the store (given its key); partial loads specifying
  an offset and/or size are supported.
- info: get information about an item via its key (exists, size, ...).
- delete: immediately remove an item from the store (given its key).
- move: implements renaming, soft delete/undelete, and moving to the current
  nesting level.
- stats: API call counters, time spent in API methods, data volume/throughput.
- latency/bandwidth emulator: can emulate higher latency (via
  BORGSTORE_LATENCY [us]) and lower bandwidth (via BORGSTORE_BANDWIDTH
  [bit/s]) than what is actually provided by the backend.

Store operations (and per-op timing and volume) are logged at DEBUG log level.

Automatic Nesting
-----------------

For the Store user, items have names such as:

- namespace/0123456789abcdef...
- namespace/abcdef0123456789...

If there are very many items in the namespace, this could lead to scalability
issues in the backend. The Store implementation therefore offers transparent
nesting, so that internally the backend API is called with names such as:

- namespace/01/23/45/0123456789abcdef...
- namespace/ab/cd/ef/abcdef0123456789...

The nesting depth can be configured from 0 (= no nesting) to N levels and
there can be different nesting configurations depending on the namespace.
The Store supports operating at different nesting levels in the same
namespace at the same time.

When using nesting depth > 0, the backends assume that keys are hashes
(contain hex digits) because some backends pre-create the nesting directories
at initialization time to optimize backend performance.

Soft deletion
-------------

To soft-delete an item (so its value can still be read or it can be
undeleted), the store just renames the item, appending ".del" to its name.

Undelete reverses this by removing the ".del" suffix from the name.

Some store operations provide a boolean flag "deleted" to control whether
they consider soft-deleted items.

Backends
--------

The backend API is rather simple; one only needs to provide some very basic
operations. Existing backends are listed below; more might come in the future.

posixfs
~~~~~~~

Use storage on a local POSIX filesystem:

- URL: ``file:///absolute/path``
- It is the caller's responsibility to convert a relative path into an
  absolute filesystem path.
- Namespaces: directories
- Values: in key-named files
- Permissions: This backend can enforce a simple, test-friendly permission
  system and raises ``PermissionDenied`` if access is not permitted by the
  configuration.

  You provide a mapping of names (paths) to granted permission letters.
  Permissions apply to the exact name and all of its descendants
  (inheritance). If a name is not present in the mapping, its nearest
  ancestor is consulted, up to the empty name "" (the store root). If no
  mapping is provided at all, all operations are allowed.

  Permission letters:

  - ``l``: allow listing object names (directory/namespace listing)
  - ``r``: allow reading objects (contents)
  - ``w``: allow writing new objects (must not already exist)
  - ``W``: allow writing objects including overwriting existing objects
  - ``D``: allow deleting objects

  Operation requirements:

  - create(): requires ``w`` or ``W`` on the store root (``wW``)
  - destroy(): requires ``D`` on the store root
  - mkdir(name): requires ``w``
  - rmdir(name): requires ``w`` or ``D`` (``wD``)
  - list(name): requires ``l``
  - info(name): requires ``l`` (``r`` also accepted)
  - load(name): requires ``r``
  - store(name, value): requires ``w`` for new objects, ``W`` for overwrites (``wW``)
  - delete(name): requires ``D``
  - move(src, dst): requires ``D`` for the source and ``w``/``W`` for the destination

  Examples:

  - Read-only store (recursively): ``permissions = {"": "lr"}``
  - No-delete, no-overwrite (but allow adding new items): ``permissions = {"": "lrw"}``
  - Hierarchical rules: only allow listing at root, allow read/write in
    "dir", but only read for "dir/file"::

      permissions = {
          "": "l",
          "dir": "lrw",
          "dir/file": "r",
      }

  To use permissions with ``Store`` and ``posixfs``, pass the mapping to
  Store and it will be handed to the posixfs backend::

      from borgstore import Store

      store = Store(url="file:///abs/path", permissions={"": "lrwWD"})
      store.create()
      store.open()
      # ...
      store.close()

sftp
~~~~

Use storage on an SFTP server:

- URL: ``sftp://user@server:port/relative/path`` (strongly recommended)

  For users' and admins' convenience, the mapping of the URL path to the
  server filesystem path depends on the server configuration (home directory,
  sshd/sftpd config, ...). Usually the path is relative to the user's home
  directory.

- URL: ``sftp://user@server:port//absolute/path``

  As this uses an absolute path, some things become more difficult:

  - A user's configuration might break if a server admin moves a user's home
    to a new location.
  - Users must know the full absolute path of the space they are permitted
    to use.

- Namespaces: directories
- Values: in key-named files

rclone
~~~~~~

Use storage on any of the many cloud providers `rclone
<https://rclone.org/>`_ supports:

- URL: ``rclone:remote:path`` — we just prefix "rclone:" and pass everything
  to the right of that to rclone; see:
  https://rclone.org/docs/#syntax-of-remote-paths
- The implementation primarily depends on the specific remote.
- The rclone binary path can be set via the environment variable
  ``RCLONE_BINARY`` (default: "rclone").

s3
~~

Use storage on an S3-compliant cloud service:

- URL: ``(s3|b2):[profile|(access_key_id:access_key_secret)@][scheme://hostname[:port]]/bucket/path``

The underlying backend is based on ``boto3``, so all standard boto3
authentication methods are supported:

- provide a named profile (from your boto3 config),
- include access key ID and secret in the URL,
- or use default credentials (e.g., environment variables, IAM roles, etc.).

See the `boto3 credentials documentation
<https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html>`_
for more details.

If you're connecting to **AWS S3**, the ``[scheme://hostname[:port]]`` part
is optional. Bucket and path are always required.

.. note::

   There is a known issue with some S3-compatible services (e.g.,
   **Backblaze B2**). If you encounter problems, try using ``b2:`` instead
   of ``s3:`` in the URL.
- Namespaces: directories
- Values: in key-named files

REST (http/https)
~~~~~~~~~~~~~~~~~

Use storage on a BorgStore REST server:

- URL: ``http[s]://[user:password@]host:port/``
- Namespaces: depends on the backend used by the server
- Values: depends on the backend used by the server
- Authentication: optional Basic Auth is supported.

REST Server
-----------

BorgStore includes a simple REST server that can be used to provide remote
access to any BorgStore backend.

Running the server
~~~~~~~~~~~~~~~~~~

Run a server with a file: backend (for a local directory), using HTTP Basic
Authentication::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore

Accessing the server from a client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The borgstore REST client can then access the server via::

    http://user:pass@127.0.0.1:5618/

Permissions
~~~~~~~~~~~

The REST server, when used with the ``posixfs`` backend, supports the same
permissions system as that backend (see above). If ``--permissions`` is
omitted, all operations are allowed. To restrict permissions, pass a
JSON-encoded permissions mapping via ``--permissions``.
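The nearest-ancestor lookup used by this permissions system can be sketched
in a few lines of plain Python. This is an illustrative model only, not
borgstore's actual implementation; the function name
``effective_permissions`` is hypothetical:

```python
def effective_permissions(permissions, name):
    """Return the permission letters that apply to ``name``.

    Walks from the exact name up through its ancestors to the store
    root ("") and returns the first mapping entry found.
    """
    if permissions is None:
        return "lrwWD"  # no mapping at all: everything is allowed
    while True:
        if name in permissions:
            return permissions[name]
        if name == "":
            return ""  # reached the root without a match: nothing granted
        # strip the last path component; top-level names fall back to ""
        name, _, _ = name.rpartition("/")

perms = {"": "l", "dir": "lrw", "dir/file": "r"}
print(effective_permissions(perms, "dir/file"))    # r (exact match)
print(effective_permissions(perms, "dir/other"))   # lrw (inherited from "dir")
print(effective_permissions(perms, "other/x"))     # l (inherited from the root)
```

A name inherits from its nearest mapped ancestor, so ``dir/other`` falls back
to the ``dir`` entry, and a completely unmapped name falls back to the store
root ``""``.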
Examples:

Read-only access::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lr"}'

No-delete, no-overwrite (allow adding new items)::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lrw"}'

Full access::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lrwWD"}'

BorgBackup shortcuts
^^^^^^^^^^^^^^^^^^^^

Instead of hand-crafting a JSON mapping, you can use a named shortcut
tailored for `BorgBackup <https://www.borgbackup.org/>`_ repositories:

``borgbackup-all``
    No permission restrictions — all operations are allowed (equivalent to
    omitting ``--permissions``).

``borgbackup-no-delete``
    Prevent deletion and overwriting of existing objects; new objects may
    still be added.

``borgbackup-write-only``
    Clients may store new data but cannot read existing data back (except
    for caches and metadata that borg needs internally).

``borgbackup-read-only``
    Clients may only list and read objects.

Example — restrict a backup server to no-delete access:

.. code-block:: bash

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///home/user/repos/repo1 \
        --permissions borgbackup-no-delete

Custom JSON permissions
^^^^^^^^^^^^^^^^^^^^^^^

You can also pass an arbitrary JSON-encoded permissions mapping directly.

Hierarchical rules (list-only at root, read/write in ``data/``)::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "l", "data": "lrw"}'

Scalability
-----------

- Count of key/value pairs stored in a namespace: automatic nesting is
  provided for keys to address common scalability issues.
- Key size: there are no special provisions for extremely long keys (e.g.,
  exceeding backend limitations). Usually this is not a problem, though.
- Value size: there are no special provisions for dealing with large value
  sizes (e.g., more than available memory, more than backend storage
  limitations, etc.). If one deals with very large values, one usually cuts
  them into chunks before storing them in the store.
- Partial loads improve performance by avoiding a full load if only part of
  the value is needed (e.g., a header with metadata).

Installation
------------

Install without the extras::

    pip install borgstore
    pip install "borgstore[none]"  # same thing (simplifies automation)

Install with the ``rest:`` backend (more dependencies)::

    pip install "borgstore[rest]"

Install with the ``sftp:`` backend (more dependencies)::

    pip install "borgstore[sftp]"

Install with the ``s3:`` backend (more dependencies)::

    pip install "borgstore[s3]"

Install with the ``rclone:`` backend (more dependencies)::

    pip install "borgstore[rclone]"

Please note that ``rclone:`` also supports SFTP and S3 remotes.

Want a demo?
------------

Run this to get instructions on how to run the demo::

    python3 -m borgstore

State of this project
---------------------

**API is still unstable and expected to change as development goes on.**

**As long as the API is unstable, there will be no data migration tools,
such as tools for upgrading an existing store's data to a new release.**

There are tests, and they pass for the basic functionality, so some
functionality is already working well. There might be missing features or
optimization potential. Feedback is welcome!

Many possible backends are still missing. If you want to create and support
one, pull requests are welcome.

Borg?
-----

Please note that this code is currently **not** used by the stable release
of BorgBackup (also known as "borg"), but only by Borg 2 beta 10+ and the
master branch.

License
-------

BSD license.
borgstore-0.4.0/README.rst

BorgStore
=========

A key/value store implementation in Python, supporting multiple backends.

Keys
----

A key (str) can look like:

- 0123456789abcdef... (usually a long, hex-encoded hash value)
- any other pure ASCII string without '/', '..', or spaces.

Namespaces
----------

To keep things separate, keys should be prefixed with a namespace, such as:

- config/settings
- meta/0123456789abcdef...
- data/0123456789abcdef...

Please note:

1. You should always use namespaces.
2. Nested namespaces like namespace1/namespace2/key are not supported.
3. The code can work without a namespace (empty namespace ""), but then you
   can't add another namespace later, because that would create nested
   namespaces.

Values
------

Values can be any arbitrary binary data (bytes).

Store Operations
----------------

The high-level Store API implementation transparently deals with nesting and
soft deletion, so the caller doesn't need to care much about that, and the
backend API can be much simpler:

- create/destroy: initialize or remove the whole store.
- list: flat list of the items in the given namespace (by default, only
  non-deleted items; optionally, only soft-deleted items).
- store: write a new item into the store (providing its key/value pair).
- load: read a value from the store (given its key); partial loads specifying
  an offset and/or size are supported.
- info: get information about an item via its key (exists, size, ...).
- delete: immediately remove an item from the store (given its key).
- move: implements renaming, soft delete/undelete, and moving to the current
  nesting level.
- stats: API call counters, time spent in API methods, data volume/throughput.
- latency/bandwidth emulator: can emulate higher latency (via
  BORGSTORE_LATENCY [us]) and lower bandwidth (via BORGSTORE_BANDWIDTH
  [bit/s]) than what is actually provided by the backend.

Store operations (and per-op timing and volume) are logged at DEBUG log level.

Automatic Nesting
-----------------

For the Store user, items have names such as:

- namespace/0123456789abcdef...
- namespace/abcdef0123456789...

If there are very many items in the namespace, this could lead to scalability
issues in the backend. The Store implementation therefore offers transparent
nesting, so that internally the backend API is called with names such as:

- namespace/01/23/45/0123456789abcdef...
- namespace/ab/cd/ef/abcdef0123456789...

The nesting depth can be configured from 0 (= no nesting) to N levels and
there can be different nesting configurations depending on the namespace.
The Store supports operating at different nesting levels in the same
namespace at the same time.

When using nesting depth > 0, the backends assume that keys are hashes
(contain hex digits) because some backends pre-create the nesting directories
at initialization time to optimize backend performance.

Soft deletion
-------------

To soft-delete an item (so its value can still be read or it can be
undeleted), the store just renames the item, appending ".del" to its name.

Undelete reverses this by removing the ".del" suffix from the name.

Some store operations provide a boolean flag "deleted" to control whether
they consider soft-deleted items.

Backends
--------

The backend API is rather simple; one only needs to provide some very basic
operations. Existing backends are listed below; more might come in the future.

posixfs
~~~~~~~

Use storage on a local POSIX filesystem:

- URL: ``file:///absolute/path``
- It is the caller's responsibility to convert a relative path into an
  absolute filesystem path.
- Namespaces: directories
- Values: in key-named files
- Permissions: This backend can enforce a simple, test-friendly permission
  system and raises ``PermissionDenied`` if access is not permitted by the
  configuration.

  You provide a mapping of names (paths) to granted permission letters.
  Permissions apply to the exact name and all of its descendants
  (inheritance). If a name is not present in the mapping, its nearest
  ancestor is consulted, up to the empty name "" (the store root). If no
  mapping is provided at all, all operations are allowed.

  Permission letters:

  - ``l``: allow listing object names (directory/namespace listing)
  - ``r``: allow reading objects (contents)
  - ``w``: allow writing new objects (must not already exist)
  - ``W``: allow writing objects including overwriting existing objects
  - ``D``: allow deleting objects

  Operation requirements:

  - create(): requires ``w`` or ``W`` on the store root (``wW``)
  - destroy(): requires ``D`` on the store root
  - mkdir(name): requires ``w``
  - rmdir(name): requires ``w`` or ``D`` (``wD``)
  - list(name): requires ``l``
  - info(name): requires ``l`` (``r`` also accepted)
  - load(name): requires ``r``
  - store(name, value): requires ``w`` for new objects, ``W`` for overwrites (``wW``)
  - delete(name): requires ``D``
  - move(src, dst): requires ``D`` for the source and ``w``/``W`` for the destination

  Examples:

  - Read-only store (recursively): ``permissions = {"": "lr"}``
  - No-delete, no-overwrite (but allow adding new items): ``permissions = {"": "lrw"}``
  - Hierarchical rules: only allow listing at root, allow read/write in
    "dir", but only read for "dir/file"::

      permissions = {
          "": "l",
          "dir": "lrw",
          "dir/file": "r",
      }

  To use permissions with ``Store`` and ``posixfs``, pass the mapping to
  Store and it will be handed to the posixfs backend::

      from borgstore import Store

      store = Store(url="file:///abs/path", permissions={"": "lrwWD"})
      store.create()
      store.open()
      # ...
      store.close()

sftp
~~~~

Use storage on an SFTP server:

- URL: ``sftp://user@server:port/relative/path`` (strongly recommended)

  For users' and admins' convenience, the mapping of the URL path to the
  server filesystem path depends on the server configuration (home directory,
  sshd/sftpd config, ...). Usually the path is relative to the user's home
  directory.

- URL: ``sftp://user@server:port//absolute/path``

  As this uses an absolute path, some things become more difficult:

  - A user's configuration might break if a server admin moves a user's home
    to a new location.
  - Users must know the full absolute path of the space they are permitted
    to use.

- Namespaces: directories
- Values: in key-named files

rclone
~~~~~~

Use storage on any of the many cloud providers `rclone
<https://rclone.org/>`_ supports:

- URL: ``rclone:remote:path`` — we just prefix "rclone:" and pass everything
  to the right of that to rclone; see:
  https://rclone.org/docs/#syntax-of-remote-paths
- The implementation primarily depends on the specific remote.
- The rclone binary path can be set via the environment variable
  ``RCLONE_BINARY`` (default: "rclone").

s3
~~

Use storage on an S3-compliant cloud service:

- URL: ``(s3|b2):[profile|(access_key_id:access_key_secret)@][scheme://hostname[:port]]/bucket/path``

The underlying backend is based on ``boto3``, so all standard boto3
authentication methods are supported:

- provide a named profile (from your boto3 config),
- include access key ID and secret in the URL,
- or use default credentials (e.g., environment variables, IAM roles, etc.).

See the `boto3 credentials documentation
<https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html>`_
for more details.

If you're connecting to **AWS S3**, the ``[scheme://hostname[:port]]`` part
is optional. Bucket and path are always required.

.. note::

   There is a known issue with some S3-compatible services (e.g.,
   **Backblaze B2**). If you encounter problems, try using ``b2:`` instead
   of ``s3:`` in the URL.
- Namespaces: directories
- Values: in key-named files

REST (http/https)
~~~~~~~~~~~~~~~~~

Use storage on a BorgStore REST server:

- URL: ``http[s]://[user:password@]host:port/``
- Namespaces: depends on the backend used by the server
- Values: depends on the backend used by the server
- Authentication: optional Basic Auth is supported.

REST Server
-----------

BorgStore includes a simple REST server that can be used to provide remote
access to any BorgStore backend.

Running the server
~~~~~~~~~~~~~~~~~~

Run a server with a file: backend (for a local directory), using HTTP Basic
Authentication::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore

Accessing the server from a client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The borgstore REST client can then access the server via::

    http://user:pass@127.0.0.1:5618/

Permissions
~~~~~~~~~~~

The REST server, when used with the ``posixfs`` backend, supports the same
permissions system as that backend (see above). If ``--permissions`` is
omitted, all operations are allowed. To restrict permissions, pass a
JSON-encoded permissions mapping via ``--permissions``.
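The nearest-ancestor lookup used by this permissions system can be sketched
in a few lines of plain Python. This is an illustrative model only, not
borgstore's actual implementation; the function name
``effective_permissions`` is hypothetical:

```python
def effective_permissions(permissions, name):
    """Return the permission letters that apply to ``name``.

    Walks from the exact name up through its ancestors to the store
    root ("") and returns the first mapping entry found.
    """
    if permissions is None:
        return "lrwWD"  # no mapping at all: everything is allowed
    while True:
        if name in permissions:
            return permissions[name]
        if name == "":
            return ""  # reached the root without a match: nothing granted
        # strip the last path component; top-level names fall back to ""
        name, _, _ = name.rpartition("/")

perms = {"": "l", "dir": "lrw", "dir/file": "r"}
print(effective_permissions(perms, "dir/file"))    # r (exact match)
print(effective_permissions(perms, "dir/other"))   # lrw (inherited from "dir")
print(effective_permissions(perms, "other/x"))     # l (inherited from the root)
```

A name inherits from its nearest mapped ancestor, so ``dir/other`` falls back
to the ``dir`` entry, and a completely unmapped name falls back to the store
root ``""``.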
Examples:

Read-only access::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lr"}'

No-delete, no-overwrite (allow adding new items)::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lrw"}'

Full access::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "lrwWD"}'

BorgBackup shortcuts
^^^^^^^^^^^^^^^^^^^^

Instead of hand-crafting a JSON mapping, you can use a named shortcut
tailored for `BorgBackup <https://www.borgbackup.org/>`_ repositories:

``borgbackup-all``
    No permission restrictions — all operations are allowed (equivalent to
    omitting ``--permissions``).

``borgbackup-no-delete``
    Prevent deletion and overwriting of existing objects; new objects may
    still be added.

``borgbackup-write-only``
    Clients may store new data but cannot read existing data back (except
    for caches and metadata that borg needs internally).

``borgbackup-read-only``
    Clients may only list and read objects.

Example — restrict a backup server to no-delete access:

.. code-block:: bash

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///home/user/repos/repo1 \
        --permissions borgbackup-no-delete

Custom JSON permissions
^^^^^^^^^^^^^^^^^^^^^^^

You can also pass an arbitrary JSON-encoded permissions mapping directly.

Hierarchical rules (list-only at root, read/write in ``data/``)::

    python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \
        --username user --password pass \
        --backend file:///tmp/teststore \
        --permissions '{"": "l", "data": "lrw"}'

Scalability
-----------

- Count of key/value pairs stored in a namespace: automatic nesting is
  provided for keys to address common scalability issues.
- Key size: there are no special provisions for extremely long keys (e.g.,
  exceeding backend limitations). Usually this is not a problem, though.
- Value size: there are no special provisions for dealing with large value
  sizes (e.g., more than available memory, more than backend storage
  limitations, etc.). If one deals with very large values, one usually cuts
  them into chunks before storing them in the store.
- Partial loads improve performance by avoiding a full load if only part of
  the value is needed (e.g., a header with metadata).

Installation
------------

Install without the extras::

    pip install borgstore
    pip install "borgstore[none]"  # same thing (simplifies automation)

Install with the ``rest:`` backend (more dependencies)::

    pip install "borgstore[rest]"

Install with the ``sftp:`` backend (more dependencies)::

    pip install "borgstore[sftp]"

Install with the ``s3:`` backend (more dependencies)::

    pip install "borgstore[s3]"

Install with the ``rclone:`` backend (more dependencies)::

    pip install "borgstore[rclone]"

Please note that ``rclone:`` also supports SFTP and S3 remotes.

Want a demo?
------------

Run this to get instructions on how to run the demo::

    python3 -m borgstore

State of this project
---------------------

**API is still unstable and expected to change as development goes on.**

**As long as the API is unstable, there will be no data migration tools,
such as tools for upgrading an existing store's data to a new release.**

There are tests, and they pass for the basic functionality, so some
functionality is already working well. There might be missing features or
optimization potential. Feedback is welcome!

Many possible backends are still missing. If you want to create and support
one, pull requests are welcome.

Borg?
-----

Please note that this code is currently **not** used by the stable release
of BorgBackup (also known as "borg"), but only by Borg 2 beta 10+ and the
master branch.

License
-------

BSD license.
borgstore-0.4.0/pyproject.toml

[project]
name = "borgstore"
dynamic = ["version"]
authors = [{name="Thomas Waldmann", email="tw@waldmann-edv.de"}, ]
description = "key/value store"
readme = "README.rst"
keywords = ["kv", "key/value", "store"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "Operating System :: POSIX",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Programming Language :: Python :: 3.14",
    "Topic :: Software Development :: Libraries",
    "Topic :: Software Development :: Libraries :: Python Modules",
]
license = "BSD-3-Clause"
license-files = ["LICENSE.rst"]
requires-python = ">=3.10"
dependencies = []

[project.optional-dependencies]
rest = [
    "requests >= 2.25.1",
]
rclone = [
    "requests >= 2.25.1",
]
sftp = [
    "paramiko >= 1.9.1",  # 1.9.1+ supports multiple IdentityKey entries in .ssh/config
]
s3 = [
    "boto3",
]
none = []

[project.urls]
Homepage = "https://github.com/borgbackup/borgstore"

[build-system]
requires = ["setuptools>=78.1.1", "setuptools_scm[toml]>=6.2"]
build-backend = "setuptools.build_meta"

[tool.setuptools_scm]
# make sure we have the same versioning scheme with all setuptools_scm versions, to avoid different autogenerated files
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1015052
# https://github.com/borgbackup/borg/issues/6875
write_to = "src/borgstore/_version.py"
write_to_template = "__version__ = version = {version!r}\n"

[tool.black]
line-length = 120
skip-magic-trailing-comma = true
target-version = ['py310']

[tool.pytest.ini_options]
minversion = "6.0"
testpaths = ["tests"]

[tool.flake8]
# Ignoring E203 due to https://github.com/PyCQA/pycodestyle/issues/373
ignore = ['E226', 'W503', 'E203']
max_line_length = 120
exclude = ['build', 'dist', '.git', '.idea', '.mypy_cache', '.tox']

[tool.mypy]
python_version = '3.10'
strict_optional = false
local_partial_types = true
show_error_codes = true
files = 'src/borgstore/**/*.py'

borgstore-0.4.0/setup.cfg

[egg_info]
tag_build =
tag_date = 0

borgstore-0.4.0/src/borgstore/__init__.py

"""
BorgStore: a key/value store.
"""

from ._version import __version__, version  # noqa

borgstore-0.4.0/src/borgstore/__main__.py

"""
Demo for BorgStore
==================

Usage: python -m borgstore <storage_url>

For example: python -m borgstore file:///tmp/borgstore_storage

Please be careful: the given storage will be created, used, and **completely deleted**!
""" def run_demo(storage_url): from .store import Store def id_key(data: bytes): from hashlib import new h = new("sha256", data) return f"data/{h.hexdigest()}" levels_config = { "config/": [0], # no nesting needed/wanted for the configs "data/": [2], # 2 nesting levels wanted for the data } store = Store(url=storage_url, levels=levels_config) try: store.create() except FileExistsError: # Currently, we only have file:// storages, so this should be fine. print("Error: do not specify an existing directory.") return with store: print("Writing 2 items to config namespace...") settings1_key = "config/settings1" store.store(settings1_key, b"value1 = 42") settings2_key = "config/settings2" store.store(settings2_key, b"value2 = 23") print(f"Listing config namespace contents: {list(store.list('config'))}") settings1_value = store.load(settings1_key) print(f"Loaded from store: {settings1_key}: {settings1_value.decode()}") settings2_value = store.load(settings2_key) print(f"Loaded from store: {settings2_key}: {settings2_value.decode()}") print("Writing 2 items to data namespace...") data1 = b"some arbitrary binary data." key1 = id_key(data1) store.store(key1, data1) data2 = b"more arbitrary binary data. 
" * 2 key2 = id_key(data2) store.store(key2, data2) print(f"Soft-deleting item {key2} ...") store.move(key2, delete=True) print(f"Listing data namespace contents: {list(store.list('data', deleted=False))}") print(f"Listing data namespace contents (only deleted): {list(store.list('data', deleted=True))}") print(f"Stats: {store.stats}") answer = input("After you've inspected the storage, enter DESTROY to destroy the storage; anything else aborts: ") if answer == "DESTROY": store.destroy() if __name__ == "__main__": import sys if len(sys.argv) == 2: run_demo(sys.argv[1]) else: print(__doc__) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 borgstore-0.4.0/src/borgstore/_version.py0000644000076500000240000000004015155516137017074 0ustar00twstaff__version__ = version = '0.4.0' ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1773575263.3635828 borgstore-0.4.0/src/borgstore/backends/0000755000076500000240000000000015155516137016456 5ustar00twstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1757407943.0 borgstore-0.4.0/src/borgstore/backends/__init__.py0000644000076500000240000000013615057765307020575 0ustar00twstaff""" Package containing backend implementations. See borgstore.backends._base for details. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773360264.0 borgstore-0.4.0/src/borgstore/backends/_base.py0000644000076500000240000001136715154652210020101 0ustar00twstaff""" Base class and type definitions for all backend implementations in this package. Docs that are not backend-specific are also found here. 
""" from abc import ABC, abstractmethod from collections import namedtuple from typing import Iterator from ..constants import MAX_NAME_LENGTH, TMP_SUFFIX ItemInfo = namedtuple("ItemInfo", "name exists size directory") def validate_name(name): """Validate a backend key/name.""" # this is used before an object is accepted for storage and # it is also used before a name is returned by list method. # no crap in, no crap out (even if it is not from us). if not isinstance(name, str): raise TypeError(f"name must be str, but got: {type(name)}") # name must not be too long if len(name) > MAX_NAME_LENGTH: raise ValueError(f"name is too long (max: {MAX_NAME_LENGTH}): {name}") # avoid encoding issues try: name.encode("ascii") except UnicodeEncodeError: raise ValueError(f"name must encode to plain ascii, but failed with: {name}") # security: name must be relative - can be foo or foo/bar/baz, but must never be /foo or ../foo if name.startswith("/") or name.endswith("/") or ".." in name: raise ValueError(f"name must be relative and not contain '..': {name}") # names used here always have '/' as separator, never '\' - # this is to avoid confusion in case this is ported to e.g. Windows. # also: no blanks - simplifies usage via CLI / shell. if "\\" in name or " " in name: raise ValueError(f"name must not contain backslashes or blanks: {name}") # name must be lowercase - this is to avoid troubles in case this is ported to a non-case-sensitive backend. # also, guess we want to avoid that a key "config" would address a different item than a key "CONFIG" or # a key "1234CAFE5678BABE" would address a different item than a key "1234cafe5678babe". if name != name.lower(): raise ValueError(f"name must be lowercase, but got: {name}") if name.endswith(TMP_SUFFIX): # TMP_SUFFIX is used for temporary files internally, e.g. while files are uploading. 
raise ValueError(f"name must not end with {TMP_SUFFIX}, but got: {name}") class BackendBase(ABC): # a backend can request all directories to be pre-created once at backend creation (initialization) time. # for some backends this will optimize the performance of store and move operation, because they won't # have to care for ad-hoc directory creation for every store or move call. of course, create will take # significantly longer, especially if nesting on levels > 1 is used. # otoh, for some backends this might be completely pointless, e.g. if mkdir is a NOP (is ignored). # for the unit tests, precreate_dirs should be set to False, otherwise they get slowed down too much. # for interactive usage, precreate_dirs = False is often the less annoying, quicker option. # code in .store and .move methods can deal with mkdir in the exception handler, after first just # assuming that the directory is usually already there. precreate_dirs: bool = False @abstractmethod def create(self): """create (initialize) a backend storage""" @abstractmethod def destroy(self): """completely remove the backend storage (and its contents)""" def __enter__(self): self.open() return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() return False @abstractmethod def open(self): """open (start using) a backend storage""" @abstractmethod def close(self): """close (stop using) a backend storage""" @abstractmethod def mkdir(self, name: str) -> None: """create directory/namespace """ @abstractmethod def rmdir(self, name: str) -> None: """remove directory/namespace """ @abstractmethod def info(self, name) -> ItemInfo: """return information about """ @abstractmethod def load(self, name: str, *, size=None, offset=0) -> bytes: """load value from """ @abstractmethod def store(self, name: str, value: bytes) -> None: """store into """ @abstractmethod def delete(self, name: str) -> None: """delete """ @abstractmethod def move(self, curr_name: str, new_name: str) -> None: """rename curr_name to 
new_name (overwrite target)""" @abstractmethod def list(self, name: str) -> Iterator[ItemInfo]: """list the contents of , non-recursively. Does not yield TMP_SUFFIX items - usually they are either not finished uploading or they are leftover crap from aborted uploads. The yielded ItemInfos are sorted alphabetically by name. """ ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1757407943.0 borgstore-0.4.0/src/borgstore/backends/errors.py0000644000076500000240000000156315057765307020357 0ustar00twstaff""" Generic exception classes used by all backends. """ class BackendError(Exception): """Base class for exceptions in this module.""" class BackendURLInvalid(BackendError): """Raised when trying to create a store using an invalid backend URL.""" class NoBackendGiven(BackendError): """Raised when trying to create a store and giving neither a backend nor a URL.""" class BackendAlreadyExists(BackendError): """Raised when a backend already exists.""" class BackendDoesNotExist(BackendError): """Raised when a backend does not exist.""" class BackendMustNotBeOpen(BackendError): """Backend must not be open.""" class BackendMustBeOpen(BackendError): """Backend must be open.""" class ObjectNotFound(BackendError): """Object not found.""" class PermissionDenied(BackendError): """Permission denied for the requested operation.""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773549086.0 borgstore-0.4.0/src/borgstore/backends/posixfs.py0000644000076500000240000003061115155433036020520 0ustar00twstaff""" Filesystem-based backend implementation - uses files in directories below a base path. 
""" import os import re import sys from urllib.parse import unquote from pathlib import Path import shutil import stat import tempfile from ._base import BackendBase, ItemInfo, validate_name from .errors import BackendError, BackendAlreadyExists, BackendDoesNotExist, BackendMustNotBeOpen, BackendMustBeOpen from .errors import ObjectNotFound, PermissionDenied from ..constants import TMP_SUFFIX def get_file_backend(url, permissions=None): # file:///absolute/path # notes: # - we only support **local** fs **absolute** paths. # - there is no such thing as a "relative path" local fs file: URL # - the general URL syntax is proto://host/path # - // introduces the host part. it is empty here, meaning localhost / local fs. # - the third slash is NOT optional, it is the start of an absolute path as well # as the separator between the host and the path part. # - the caller is responsible to give an absolute path. # - Windows: see: https://en.wikipedia.org/wiki/File_URI_scheme windows_file_regex = r""" file:// # only empty host part is supported. / # 3rd slash is separator ONLY, not part of the path. (?P([a-zA-Z]:/.*)) # path must be an absolute path. """ file_regex = r""" file:// # only empty host part is supported. (?P(/.*)) # path must be an absolute path. 3rd slash is separator AND part of the path. """ # the path or drive_and_path could be URL-quoted and thus must be URL-unquoted if sys.platform in ("win32", "msys", "cygwin"): m = re.match(windows_file_regex, url, re.VERBOSE) if m: return PosixFS(path=unquote(m["drive_and_path"]), permissions=permissions) m = re.match(file_regex, url, re.VERBOSE) if m: return PosixFS(path=unquote(m["path"]), permissions=permissions) class PosixFS(BackendBase): # PosixFS implementation supports precreate = True as well as = False. 
precreate_dirs: bool = False def __init__(self, path, *, do_fsync=False, permissions=None): self.base_path = Path(path) if not self.base_path.is_absolute(): raise BackendError(f"path must be an absolute path: {path}") self.opened = False self.do_fsync = do_fsync # False = 26x faster, see #10 self.permissions = permissions or {} # name [str] -> granted_permissions [str] def _check_permission(self, name, required_permissions): """ Check in the self.permissions mapping if one of the required_permissions is granted for the given name or its parents. Permission characters: - l: allow listing object names ("namespace/directory listing") - r: allow reading objects (contents) - w: allow writing NEW objects (must not already exist) - W: allow writing objects (also overwrite existing objects) - D: allow deleting objects Move requires "D" (src) and "wW" (dst). Moves are used by the Store for soft-deletion/undeletion, level changes and generic renames. If permissions are granted for a directory like "foo", they also apply to objects below that directory, like "foo/bar". """ assert set(required_permissions).issubset("lrwWD") if not self.permissions: # If no permissions dict is provided, allow all operations. return # Check permissions, starting from full name (full path) going up to the root. path_parts = name.split("/") for i in range(len(path_parts), -1, -1): # i: LEN .. 0 path = "/".join(path_parts[:i]) # path: full path .. root if path in self.permissions: granted_permissions = self.permissions[path] # Check if any of the required permissions is present. if set(required_permissions) & set(granted_permissions): return # Permission granted # If path was found in permissions but didn't have required permission, we stop here # (more specific longer-path entry takes precedence over shorter-path entry). 
break # If we get here, none of the required permissions was found raise PermissionDenied(f"One of permissions '{required_permissions}' required for '{name}'") def create(self): if self.opened: raise BackendMustNotBeOpen() self._check_permission("", "wW") # we accept an already existing empty directory and we also optionally create # any missing parent dirs. the latter is important for repository hosters that # only offer limited access to their storage (e.g. only via borg/borgstore). # also, it is simpler than requiring users to create parent dirs separately. self.base_path.mkdir(exist_ok=True, parents=True) # avoid that users create a mess by using non-empty directories: contents = list(self.base_path.iterdir()) if contents: raise BackendAlreadyExists(f"posixfs storage base path is not empty: {self.base_path}") def destroy(self): if self.opened: raise BackendMustNotBeOpen() self._check_permission("", "D") if not self.base_path.exists(): raise BackendDoesNotExist(f"posixfs storage base path does not exist: {self.base_path}") def onexc(func, path, exc): # for rmtree, this is called if it can't remove a file or directory. # usually, this is because of missing permissions. if path != os.fspath(self.base_path): raise exc # do not raise if we can't remove the base path directory. # .create accepts an already existing base path, thus # .destroy may leave an existing base path behind. 
def onerror(func, path, excinfo): onexc(func, path, excinfo[1]) kw = {"onexc": onexc} if sys.version_info >= (3, 12) else {"onerror": onerror} shutil.rmtree(os.fspath(self.base_path), **kw) def open(self): if self.opened: raise BackendMustNotBeOpen() if not self.base_path.is_dir(): raise BackendDoesNotExist( f"posixfs storage base path does not exist or is not a directory: {self.base_path}" ) self.opened = True def close(self): if not self.opened: raise BackendMustBeOpen() self.opened = False def _validate_join(self, name): validate_name(name) return self.base_path / name def mkdir(self, name): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) # spamming a store with lots of random empty dirs == DoS, thus require w. self._check_permission(name, "w") path.mkdir(parents=True, exist_ok=True) def rmdir(self, name): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) # path.rmdir only removes empty directories, thus no data can be lost. # thus, a granted "w" is already good enough, "D" is also ok. self._check_permission(name, "wD") try: path.rmdir() except FileNotFoundError: raise ObjectNotFound(name) from None def info(self, name): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) # we do not read object content, so a granted "l" is enough, "r" is also ok. 
self._check_permission(name, "lr") try: st = path.stat() except FileNotFoundError: return ItemInfo(name=path.name, exists=False, directory=False, size=0) else: is_dir = stat.S_ISDIR(st.st_mode) return ItemInfo(name=path.name, exists=True, directory=is_dir, size=st.st_size) def load(self, name, *, size=None, offset=0): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) self._check_permission(name, "r") try: with path.open("rb") as f: if offset > 0: f.seek(offset) return f.read(-1 if size is None else size) except FileNotFoundError: raise ObjectNotFound(name) from None def store(self, name, value): def _write_to_tmpfile(): with tempfile.NamedTemporaryFile(suffix=TMP_SUFFIX, dir=tmp_dir, delete=False) as f: f.write(value) if self.do_fsync: f.flush() os.fsync(f.fileno()) tmp_path = Path(f.name) return tmp_path if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) self._check_permission(name, "W" if path.exists() else "wW") tmp_dir = path.parent # write to a differently named temp file in same directory first, # so the store never sees partially written data. try: # try to do it quickly, not doing the mkdir. fs ops might be slow, esp. on network fs (latency). # this will frequently succeed, because the dir is already there. tmp_path = _write_to_tmpfile() except FileNotFoundError: # retry, create potentially missing dirs first. 
this covers these cases: # - either the dirs were not precreated # - a previously existing directory was "lost" in the filesystem tmp_dir.mkdir(parents=True, exist_ok=True) tmp_path = _write_to_tmpfile() # all written and synced to disk, rename it to the final name: try: tmp_path.replace(path) except OSError: tmp_path.unlink() raise def delete(self, name): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) self._check_permission(name, "D") try: path.unlink() except FileNotFoundError: raise ObjectNotFound(name) from None def move(self, curr_name, new_name): def _rename_to_new_name(): curr_path.rename(new_path) if not self.opened: raise BackendMustBeOpen() curr_path = self._validate_join(curr_name) new_path = self._validate_join(new_name) # random moves could do a lot of harm in the store: # not finding an object anymore is similar to having it deleted. # also, the source object vanishes under its original name, thus we want D for the source. # as the move might replace the destination, we want W or wW for the destination. # move is also used for soft-deletion by the Store, that also hints to using D for the source. self._check_permission(curr_name, "D") self._check_permission(new_name, "W" if new_path.exists() else "wW") try: # try to do it quickly, not doing the mkdir. fs ops might be slow, esp. on network fs (latency). # this will frequently succeed, because the dir is already there. _rename_to_new_name() except FileNotFoundError: # retry, create potentially missing dirs first. 
this covers these cases: # - either the dirs were not precreated # - a previously existing directory was "lost" in the filesystem new_path.parent.mkdir(parents=True, exist_ok=True) try: _rename_to_new_name() except FileNotFoundError: raise ObjectNotFound(curr_name) from None def list(self, name): if not self.opened: raise BackendMustBeOpen() path = self._validate_join(name) self._check_permission(name, "l") try: paths = sorted(path.iterdir()) except FileNotFoundError: raise ObjectNotFound(name) from None else: for p in paths: try: validate_name(p.name) except ValueError: pass # that file is likely not from us or is still uploading else: try: st = p.stat() except FileNotFoundError: pass else: is_dir = stat.S_ISDIR(st.st_mode) yield ItemInfo(name=p.name, exists=True, size=st.st_size, directory=is_dir) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773551956.0 borgstore-0.4.0/src/borgstore/backends/rclone.py0000644000076500000240000002401115155440524020304 0ustar00twstaff""" BorgStore backend for rclone """ import os import re import subprocess import json import secrets from typing import Iterator import time import socket try: import requests except ImportError: requests = None from ._base import BackendBase, ItemInfo, validate_name from .errors import ( BackendError, BackendDoesNotExist, BackendMustNotBeOpen, BackendMustBeOpen, BackendAlreadyExists, ObjectNotFound, ) # rclone binary - expected to be on the path RCLONE = os.environ.get("RCLONE_BINARY", "rclone") # Debug HTTP requests and responses if False: import logging import http.client as http_client http_client.HTTPConnection.debuglevel = 1 logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) requests_log = logging.getLogger("requests.packages.urllib3") requests_log.setLevel(logging.DEBUG) requests_log.propagate = True def get_rclone_backend(url): """Get rclone backend from URL. 
rclone:remote: rclone:remote:path """ if not url.startswith("rclone:"): return None if requests is None: raise BackendDoesNotExist( "The rclone backend requires dependencies. Install them with: 'pip install borgstore[rclone]'" ) try: # Check rclone is on the path info = json.loads(subprocess.check_output([RCLONE, "rc", "--loopback", "core/version"])) except Exception: raise BackendDoesNotExist("rclone binary not found on the path or not working properly") if info["decomposed"] < [1, 57, 0]: raise BackendDoesNotExist(f"rclone version must be at least v1.57.0 - found {info['version']}") rclone_regex = r""" rclone: (?P(.*)) """ m = re.match(rclone_regex, url, re.VERBOSE) if m: # no URL-unquote here, we just pass through the rclone remote spec "as is" return Rclone(path=m["path"]) class Rclone(BackendBase): """BorgStore backend for rclone. This uses the rclone rc API to control an rclone rcd process. """ precreate_dirs: bool = False HOST = "127.0.0.1" TRIES = 3 # try failed load/store operations this many times def __init__(self, path, *, do_fsync=False): if not path.endswith(":") and not path.endswith("/"): path += "/" self.fs = path self.process = None self.url = None self.user = "borg" self.password = secrets.token_urlsafe(32) def find_available_port(self): with socket.socket() as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((self.HOST, 0)) return s.getsockname()[1] def check_port(self, port): with socket.socket() as s: try: s.connect((self.HOST, port)) return True except ConnectionRefusedError: return False def open(self): """ Start using the rclone server. 
""" if self.process: raise BackendMustNotBeOpen() while not self.process: port = self.find_available_port() # Open rclone rcd listening on a random port with random auth args = [ RCLONE, "rcd", "--rc-user", self.user, "--rc-addr", f"{self.HOST}:{port}", "--rc-serve", "--use-server-modtime", ] env = os.environ.copy() env["RCLONE_RC_PASS"] = self.password # pass password by env var so it isn't in process list self.process = subprocess.Popen( args, stderr=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stdin=subprocess.DEVNULL, env=env ) self.url = f"http://{self.HOST}:{port}/" # Wait for rclone to start up while self.process.poll() is None and not self.check_port(port): time.sleep(0.01) if self.process.poll() is None: self.noop("noop") else: self.process = None def close(self): """ Stop using the rclone server. """ if not self.process: raise BackendMustBeOpen() self.process.terminate() self.process = None self.url = None def _requests(self, fn, *args, tries=1, **kwargs): """ Run a call to the requests function fn with *args and **kwargs. It adds auth and decodes errors in a consistent way. It returns the response object. This will retry any 500 errors received from rclone 'tries' times, as these correspond to backend, protocol, or Internet errors. Note that rclone will retry all operations internally except those which stream data. """ if not self.process or not self.url: raise BackendMustBeOpen() for try_number in range(tries): r = fn(*args, auth=(self.user, self.password), **kwargs) if r.status_code in (200, 206): return r elif r.status_code == 404: raise ObjectNotFound(f"Not Found: error {r.status_code}: {r.text}") err = BackendError(f"rclone rc command failed: error {r.status_code}: {r.text}") if r.status_code != 500: break raise err def _rpc(self, command, json_input, **kwargs): """ Run the rclone command over the rclone API. Additional kwargs may be passed to requests. 
""" if not self.url: raise BackendMustBeOpen() r = self._requests(requests.post, self.url + command, json=json_input, **kwargs) return r.json() def create(self): """Create (initialize) the rclone storage.""" if self.process: raise BackendMustNotBeOpen() with self: try: if any(self.list("")): raise BackendAlreadyExists(f"rclone storage base path exists and isn't empty: {self.fs}") except ObjectNotFound: pass self.mkdir("") def destroy(self): """Completely remove the rclone storage (and its contents).""" if self.process: raise BackendMustNotBeOpen() with self: info = self.info("") if not info.exists: raise BackendDoesNotExist(f"rclone storage base path does not exist: {self.fs}") self._rpc("operations/purge", {"fs": self.fs, "remote": ""}) def __enter__(self): self.open() return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() return False def noop(self, value): """No-op request that returns the provided value.""" return self._rpc("rc/noop", {"value": value}) def mkdir(self, name: str) -> None: """Create directory/namespace .""" validate_name(name) self._rpc("operations/mkdir", {"fs": self.fs, "remote": name}) def rmdir(self, name: str) -> None: """Remove directory/namespace .""" validate_name(name) self._rpc("operations/rmdir", {"fs": self.fs, "remote": name}) def _to_item_info(self, remote, item): """Convert an rclone item at remote into a BorgStore ItemInfo.""" if item is None: return ItemInfo(name=os.path.basename(remote), exists=False, directory=False, size=0) name = item["Name"] size = item["Size"] directory = item["IsDir"] return ItemInfo(name=name, exists=True, size=size, directory=directory) def info(self, name) -> ItemInfo: """Return information about .""" validate_name(name) try: result = self._rpc( "operations/stat", {"fs": self.fs, "remote": name, "opt": {"recurse": False, "noModTime": True, "noMimeType": True}}, ) item = result["item"] except ObjectNotFound: item = None return self._to_item_info(name, item) def load(self, name: str, *, 
size=None, offset=0) -> bytes: """Load value from .""" validate_name(name) headers = {} if size is not None or offset > 0: if size is not None: headers["Range"] = f"bytes={offset}-{offset+size-1}" else: headers["Range"] = f"bytes={offset}-" r = self._requests(requests.get, f"{self.url}[{self.fs}]/{name}", tries=self.TRIES, headers=headers) return r.content def store(self, name: str, value: bytes) -> None: """Store into .""" validate_name(name) files = {"file": (os.path.basename(name), value, "application/octet-stream")} params = {"fs": self.fs, "remote": os.path.dirname(name)} self._rpc("operations/uploadfile", None, tries=self.TRIES, params=params, files=files) def delete(self, name: str) -> None: """Delete .""" validate_name(name) self._rpc("operations/deletefile", {"fs": self.fs, "remote": name}) def move(self, curr_name: str, new_name: str) -> None: """Rename curr_name to new_name (overwrite target).""" validate_name(curr_name) validate_name(new_name) self._rpc( "operations/movefile", {"srcFs": self.fs, "srcRemote": curr_name, "dstFs": self.fs, "dstRemote": new_name} ) def list(self, name: str) -> Iterator[ItemInfo]: """List the contents of , non-recursively. The yielded ItemInfos are sorted alphabetically by name. """ validate_name(name) result = self._rpc( "operations/list", {"fs": self.fs, "remote": name, "opt": {"recurse": False, "noModTime": True, "noMimeType": True}}, ) for item in result["list"]: name = item["Name"] try: validate_name(name) except ValueError: pass # that file is likely not from us or is still uploading else: yield self._to_item_info(name, item) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773551956.0 borgstore-0.4.0/src/borgstore/backends/rest.py0000644000076500000240000001725615155440524020014 0ustar00twstaff""" REST http client based backend implementation (use with borgstore.server.rest). 
""" import os import re from typing import Iterator, Dict, Optional from http import HTTPStatus as HTTP from urllib.parse import unquote try: import requests from requests.auth import HTTPBasicAuth except ImportError: requests = HTTPBasicAuth = None from ._base import BackendBase, ItemInfo, validate_name from .errors import ( ObjectNotFound, BackendAlreadyExists, BackendDoesNotExist, PermissionDenied, BackendError, BackendMustBeOpen, BackendMustNotBeOpen, ) def get_rest_backend(base_url: str): # http(s)://username:password@hostname:port/ or http(s)://hostname:port/ + auth from env # note: path component must be "/" (no sub-path allowed, as it would silently prepend to all item names) if not base_url.startswith(("http:", "https:")): return None if requests is None: raise BackendDoesNotExist( "The REST backend requires dependencies. Install them with: 'pip install borgstore[rest]'" ) http_regex = r""" (?Phttp|https):// ((?P[^:]+):(?P[^@]+)@)? (?P[^:/]+)(:(?P\d+))? (?P/) """ m = re.match(http_regex, base_url, re.VERBOSE) if m: scheme = m.group("scheme") host = m.group("host") port = m.group("port") path = m.group("path") base_url = f"{scheme}://{host}{f':{port}' if port else ''}{path}" username, password = m.group("username"), m.group("password") if username and password: username, password = unquote(username), unquote(password) else: username, password = os.environ.get("BORGSTORE_REST_USERNAME"), os.environ.get("BORGSTORE_REST_PASSWORD") return REST(base_url, username=username, password=password) class REST(BackendBase): def __init__( self, base_url: str, username: Optional[str] = None, password: Optional[str] = None, headers: Optional[Dict[str, str]] = None, timeout: Optional[int] = 30, ): self.base_url = base_url.rstrip("/") # _url method adds slash self.headers = headers or {} self.headers["Accept"] = "application/vnd.x.borgstore.rest.v1" self.timeout = timeout self.auth = HTTPBasicAuth(username, password) if username and password else None self.session = None def 
_url(self, path: str) -> str: return f"{self.base_url}/{path.lstrip('/')}" def _assert_open(self): if self.session is None: raise BackendMustBeOpen() def _assert_closed(self): if self.session is not None: raise BackendMustNotBeOpen() def _request(self, method, url, *, headers=None, data=None, params=None): if self.session is not None: # between .open() and .close() return self.session.request(method, url, params=params, data=data, headers=headers, timeout=self.timeout) else: # .create() and .destroy() are called when backend is not opened if headers is not None: raise ValueError("custom headers are not supported outside of an open session") return requests.request( method, url, auth=self.auth, params=params, data=data, headers=self.headers, timeout=self.timeout ) def _handle_response(self, response, name=None): if response.status_code == HTTP.OK: return if response.status_code == HTTP.PARTIAL_CONTENT: return if response.status_code == HTTP.NOT_FOUND: raise ObjectNotFound(name or "unknown") if response.status_code == HTTP.GONE: raise BackendDoesNotExist(self.base_url) if response.status_code == HTTP.CONFLICT: raise BackendAlreadyExists(self.base_url) if response.status_code == HTTP.PRECONDITION_FAILED: # Precondition failed, used for state errors if "must be open" in response.text: raise BackendMustBeOpen() if "must not be open" in response.text: raise BackendMustNotBeOpen() raise BackendError(response.text) if response.status_code == HTTP.FORBIDDEN: raise PermissionDenied(name or self.base_url) if response.status_code == HTTP.BAD_REQUEST: raise ValueError(response.text) response.raise_for_status() def create(self) -> None: self._assert_closed() response = self._request("post", self._url(""), params={"cmd": "create"}) self._handle_response(response, "backend") def destroy(self) -> None: self._assert_closed() response = self._request("delete", self._url(""), params={"cmd": "destroy"}) self._handle_response(response, "backend") def open(self): self._assert_closed() 
self.session = requests.Session() self.session.auth = self.auth self.session.headers.update(self.headers) def close(self): self._assert_open() self.session.close() self.session = None def mkdir(self, name: str) -> None: self._assert_open() validate_name(name) response = self._request("post", self._url(name), params={"cmd": "mkdir"}) self._handle_response(response, name) def rmdir(self, name: str) -> None: self._assert_open() validate_name(name) response = self._request("delete", self._url(name), params={"cmd": "rmdir"}) self._handle_response(response, name) def info(self, name: str) -> ItemInfo: self._assert_open() validate_name(name) response = self._request("head", self._url(name)) if response.status_code not in (HTTP.OK, HTTP.NOT_FOUND): self._handle_response(response, name) # raises! exists = response.status_code == HTTP.OK is_dir = response.headers.get("X-BorgStore-Is-Directory") == "true" return ItemInfo(name=name, exists=exists, size=int(response.headers.get("Content-Length", 0)), directory=is_dir) def load(self, name: str, *, size=None, offset=0) -> bytes: self._assert_open() validate_name(name) r_hdr = (None if not offset else f"bytes={offset}-") if size is None else f"bytes={offset}-{offset + size - 1}" headers = self.headers.copy() if r_hdr: headers["Range"] = r_hdr response = self._request("get", self._url(name), headers=headers) self._handle_response(response, name) return response.content def store(self, name: str, value: bytes) -> None: self._assert_open() validate_name(name) response = self._request("post", self._url(name), data=value) self._handle_response(response, name) def delete(self, name: str) -> None: self._assert_open() validate_name(name) response = self._request("delete", self._url(name)) self._handle_response(response, name) def move(self, curr_name: str, new_name: str) -> None: self._assert_open() validate_name(curr_name) validate_name(new_name) response = self._request("post", self._url(""), params={"cmd": "move", "current": curr_name, 
"new": new_name}) self._handle_response(response, f"{curr_name} -> {new_name}") def list(self, name: str) -> Iterator[ItemInfo]: self._assert_open() validate_name(name) response = self._request("get", self._url(name) + "/") # trailing "/" needed to get list self._handle_response(response, name) for entry in response.json(): yield ItemInfo(name=entry["name"], exists=True, size=entry["size"], directory=entry.get("directory", False)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773360264.0 borgstore-0.4.0/src/borgstore/backends/s3.py0000644000076500000240000002760115154652210017353 0ustar00twstaff""" BorgStore backend for S3-compatible services (including Backblaze B2) using boto3. """ try: import boto3 from botocore.client import Config except ImportError: boto3 = None import re from typing import Optional import urllib.parse from ._base import BackendBase, ItemInfo, validate_name from .errors import BackendError, BackendMustBeOpen, BackendMustNotBeOpen, BackendDoesNotExist, BackendAlreadyExists from .errors import ObjectNotFound def get_s3_backend(url: str): """Get S3 backend from URL. Supports URLs of the form: (s3|b2):[profile|(access_key_id:access_key_secret)@][schema://hostname[:port]]/bucket/path """ if not url.startswith(("s3:", "b2:")): return None if boto3 is None: raise BackendDoesNotExist( "The S3 backend requires dependencies. Install them with: 'pip install borgstore[s3]'" ) # (s3|b2):[profile|(access_key_id:access_key_secret)@][schema://hostname[:port]]/bucket/path s3_regex = r""" (?P(s3|b2)): (( (?P[^@:]+) # profile (no colons allowed) | (?P[^:@]+):(?P[^@]+) # access key and secret )@)? # optional authentication ( (?P[^:/]+):// (?P[^:/]+) (:(?P\d+))? )? 
# optional endpoint / (?P[^/]+)/ # bucket name (?P.+) # path """ m = re.match(s3_regex, url, re.VERBOSE) if m: s3type = m["s3type"] profile = m["profile"] access_key_id = m["access_key_id"] access_key_secret = m["access_key_secret"] if profile is not None and access_key_id is not None: raise BackendError("S3: profile and access_key_id cannot be specified at the same time") if access_key_id is not None and access_key_secret is None: raise BackendError("S3: access_key_secret is mandatory when access_key_id is specified") if access_key_id is not None: access_key_id = urllib.parse.unquote(access_key_id) if access_key_secret is not None: access_key_secret = urllib.parse.unquote(access_key_secret) schema = m["schema"] hostname = m["hostname"] port = m["port"] bucket = m["bucket"] # no unquote: all valid bucket characters are URL-safe path = urllib.parse.unquote(m["path"]) endpoint_url = None if schema and hostname: endpoint_url = f"{schema}://{hostname}" if port: endpoint_url += f":{port}" return S3( bucket=bucket, path=path, is_b2=s3type == "b2", profile=profile, access_key_id=access_key_id, access_key_secret=access_key_secret, endpoint_url=endpoint_url, ) class S3(BackendBase): """BorgStore backend for S3 and Backblaze B2 (via boto3).""" def __init__( self, bucket: str, path: str, is_b2: bool, profile: Optional[str] = None, access_key_id: Optional[str] = None, access_key_secret: Optional[str] = None, endpoint_url: Optional[str] = None, ): self.delimiter = "/" self.bucket = bucket self.base_path = path.rstrip(self.delimiter) + self.delimiter # Ensure it ends with '/' self.opened = False if profile: session = boto3.Session(profile_name=profile) elif access_key_id and access_key_secret: session = boto3.Session(aws_access_key_id=access_key_id, aws_secret_access_key=access_key_secret) else: session = boto3.Session() config = None if is_b2: config = Config(request_checksum_calculation="when_required", response_checksum_validation="when_required") self.s3 = 
session.client("s3", endpoint_url=endpoint_url, config=config) if is_b2: event_system = self.s3.meta.events event_system.register_first("before-sign.*.*", self._fix_headers) def _fix_headers(self, request, **kwargs): if "x-amz-checksum-crc32" in request.headers: del request.headers["x-amz-checksum-crc32"] if "x-amz-sdk-checksum-algorithm" in request.headers: del request.headers["x-amz-sdk-checksum-algorithm"] def _mkdir(self, name): try: key = (self.base_path + name).rstrip(self.delimiter) + self.delimiter self.s3.put_object(Bucket=self.bucket, Key=key) except self.s3.exceptions.ClientError as e: raise BackendError(f"S3 error: {e}") def create(self): if self.opened: raise BackendMustNotBeOpen() try: objects = self.s3.list_objects_v2( Bucket=self.bucket, Prefix=self.base_path, Delimiter=self.delimiter, MaxKeys=1 ) if objects["KeyCount"] > 0: raise BackendAlreadyExists(f"Backend already exists: {self.base_path}") self._mkdir("") except self.s3.exceptions.NoSuchBucket: raise BackendDoesNotExist(f"S3 bucket does not exist: {self.bucket}") except self.s3.exceptions.ClientError as e: raise BackendError(f"S3 error: {e}") def destroy(self): if self.opened: raise BackendMustNotBeOpen() try: objects = self.s3.list_objects_v2( Bucket=self.bucket, Prefix=self.base_path, Delimiter=self.delimiter, MaxKeys=1 ) if objects["KeyCount"] == 0: raise BackendDoesNotExist(f"Backend does not exist: {self.base_path}") is_truncated = True while is_truncated: objects = self.s3.list_objects_v2(Bucket=self.bucket, Prefix=self.base_path, MaxKeys=1000) is_truncated = objects["IsTruncated"] if "Contents" in objects: self.s3.delete_objects( Bucket=self.bucket, Delete={"Objects": [{"Key": obj["Key"]} for obj in objects["Contents"]]} ) except self.s3.exceptions.ClientError as e: raise BackendError(f"S3 error: {e}") def open(self): if self.opened: raise BackendMustNotBeOpen() self.opened = True def close(self): if not self.opened: raise BackendMustBeOpen() self.opened = False def store(self, name, 
value): if not self.opened: raise BackendMustBeOpen() validate_name(name) key = self.base_path + name self.s3.put_object(Bucket=self.bucket, Key=key, Body=value) def load(self, name, *, size=None, offset=0): if not self.opened: raise BackendMustBeOpen() validate_name(name) key = self.base_path + name try: if size is None and offset == 0: obj = self.s3.get_object(Bucket=self.bucket, Key=key) return obj["Body"].read() elif size is not None and offset == 0: obj = self.s3.get_object(Bucket=self.bucket, Key=key, Range=f"bytes=0-{size - 1}") return obj["Body"].read() elif size is None and offset != 0: head = self.s3.head_object(Bucket=self.bucket, Key=key) length = head["ContentLength"] obj = self.s3.get_object(Bucket=self.bucket, Key=key, Range=f"bytes={offset}-{length - 1}") return obj["Body"].read() elif size is not None and offset != 0: obj = self.s3.get_object(Bucket=self.bucket, Key=key, Range=f"bytes={offset}-{offset + size - 1}") return obj["Body"].read() except self.s3.exceptions.NoSuchKey: raise ObjectNotFound(name) def delete(self, name): if not self.opened: raise BackendMustBeOpen() validate_name(name) key = self.base_path + name try: self.s3.head_object(Bucket=self.bucket, Key=key) self.s3.delete_object(Bucket=self.bucket, Key=key) except self.s3.exceptions.NoSuchKey: raise ObjectNotFound(name) except self.s3.exceptions.ClientError as e: if e.response["Error"]["Code"] == "404": raise ObjectNotFound(name) def move(self, curr_name, new_name): if not self.opened: raise BackendMustBeOpen() validate_name(curr_name) validate_name(new_name) src_key = self.base_path + curr_name dest_key = self.base_path + new_name try: self.s3.copy_object(Bucket=self.bucket, CopySource={"Bucket": self.bucket, "Key": src_key}, Key=dest_key) self.s3.delete_object(Bucket=self.bucket, Key=src_key) except self.s3.exceptions.NoSuchKey: raise ObjectNotFound(curr_name) def list(self, name): if not self.opened: raise BackendMustBeOpen() validate_name(name) base_prefix = (self.base_path + 
name).rstrip(self.delimiter) + self.delimiter try: start_after = "" is_truncated = True while is_truncated: objects = self.s3.list_objects_v2( Bucket=self.bucket, Prefix=base_prefix, Delimiter=self.delimiter, MaxKeys=1000, StartAfter=start_after, ) if objects["KeyCount"] == 0: raise ObjectNotFound(name) is_truncated = objects["IsTruncated"] for obj in objects.get("Contents", []): obj_name = obj["Key"][len(base_prefix) :] # Remove base_path prefix if obj_name == "": continue try: validate_name(obj_name) except ValueError: pass # that file is likely not from us or is still uploading else: start_after = obj["Key"] yield ItemInfo(name=obj_name, exists=True, size=obj["Size"], directory=False) for prefix in objects.get("CommonPrefixes", []): dir_name = prefix["Prefix"][len(base_prefix) : -1] # Remove base_path prefix and trailing slash yield ItemInfo(name=dir_name, exists=True, size=0, directory=True) except self.s3.exceptions.ClientError as e: raise BackendError(f"S3 error: {e}") def mkdir(self, name): if not self.opened: raise BackendMustBeOpen() validate_name(name) self._mkdir(name) def rmdir(self, name): if not self.opened: raise BackendMustBeOpen() validate_name(name) prefix = self.base_path + name.rstrip(self.delimiter) + self.delimiter objects = self.s3.list_objects_v2(Bucket=self.bucket, Prefix=prefix, Delimiter=self.delimiter, MaxKeys=2) if "Contents" in objects and len(objects["Contents"]) > 1: raise BackendError(f"Directory not empty: {name}") self.s3.delete_object(Bucket=self.bucket, Key=prefix) def info(self, name): if not self.opened: raise BackendMustBeOpen() validate_name(name) key = self.base_path + name try: obj = self.s3.head_object(Bucket=self.bucket, Key=key) return ItemInfo(name=name, exists=True, directory=False, size=obj["ContentLength"]) except self.s3.exceptions.ClientError as e: if e.response["Error"]["Code"] == "404": try: self.s3.head_object(Bucket=self.bucket, Key=key + self.delimiter) return ItemInfo(name=name, exists=True, directory=True, 
size=0) except self.s3.exceptions.ClientError: pass return ItemInfo(name=name, exists=False, directory=False, size=0) raise BackendError(f"S3 error: {e}") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773360264.0 borgstore-0.4.0/src/borgstore/backends/sftp.py0000644000076500000240000003141615154652210020001 0ustar00twstaff""" SFTP-based backend implementation — on an SFTP server, uses files in directories below a base path. """ from pathlib import Path from urllib.parse import unquote import random import re import stat from typing import Optional try: import paramiko except ImportError: paramiko = None from ._base import BackendBase, ItemInfo, validate_name from .errors import BackendError, BackendMustBeOpen, BackendMustNotBeOpen, BackendDoesNotExist, BackendAlreadyExists from .errors import ObjectNotFound from ..constants import TMP_SUFFIX def get_sftp_backend(url): """Get SFTP backend from URL.""" if not url.startswith("sftp:"): return None if paramiko is None: raise BackendDoesNotExist( "The SFTP backend requires dependencies. Install them with: 'pip install borgstore[sftp]'" ) # sftp://username@hostname:22/path # Notes: # - username and port are optional # - host must be a hostname (not an IP address) # - you must provide a path; by default it is a relative path (usually relative to the user's home directory — # this allows the SFTP server admin to move things without the user needing to know). # - giving an absolute path is also possible: sftp://username@hostname:22//home/username/borgstore sftp_regex = r""" sftp:// ((?P[^@]+)@)? 
        (?P<hostname>([^:/]+))(?::(?P<port>\d+))?/  # slash as separator, not part of the path
        (?P<path>(.+))  # path may or may not start with a slash, must not be empty
    """
    m = re.match(sftp_regex, url, re.VERBOSE)
    if m:
        return Sftp(
            username=unquote(m["username"]) if m["username"] else None,
            hostname=m["hostname"],
            port=int(m["port"] or "0"),
            path=unquote(m["path"]),
        )


class Sftp(BackendBase):
    """BorgStore backend for SFTP."""

    # Sftp implementation supports precreate = True as well as = False.
    precreate_dirs: bool = False

    def __init__(self, hostname: str, path: str, port: int = 0, username: Optional[str] = None):
        self.username = username
        self.hostname = hostname
        self.port = port
        self.base_path = path
        self.opened = False
        if paramiko is None:
            raise BackendError("sftp backend unavailable: could not import paramiko!")

    def _get_host_config_from_file(self, path: str, hostname: str):
        """Look up the configuration for hostname in path (SSH config file)."""
        config_path = Path(path).expanduser()
        try:
            ssh_config = paramiko.SSHConfig.from_path(config_path)
        except FileNotFoundError:
            return paramiko.SSHConfigDict()  # empty dict
        else:
            return ssh_config.lookup(hostname)

    def _get_host_config(self):
        """Assemble all provided and configured host configuration values."""
        host_config = paramiko.SSHConfigDict()
        # self.hostname might be an alias/shortcut (with real hostname given in configuration),
        # but there might be also nothing in the configs at all for self.hostname:
        host_config["hostname"] = self.hostname
        # First process system-wide SSH config, then override with user SSH config:
        host_config.update(self._get_host_config_from_file("/etc/ssh/ssh_config", self.hostname))
        # Note: no support yet for /etc/ssh/ssh_config.d/*
        host_config.update(self._get_host_config_from_file("~/.ssh/config", self.hostname))
        # Now override configured values with provided values
        if self.username is not None:
            host_config.update({"user": self.username})
        if self.port != 0:
            host_config.update({"port": self.port})
        # Make sure port is present and is an int
        host_config["port"] = int(host_config.get("port") or 22)
        return host_config

    def _connect(self):
        ssh = paramiko.SSHClient()
        # Note: we do not deal with unknown hosts and ssh.set_missing_host_key_policy here.
        # The user should make the first contact to any new host using the ssh or sftp CLI command
        # and interactively verify remote host fingerprints.
        ssh.load_system_host_keys()  # This is documented to load the user's known_hosts file
        host_config = self._get_host_config()
        ssh.connect(
            hostname=host_config["hostname"],
            username=host_config.get("user"),  # if None, paramiko will use current user
            port=host_config["port"],
            key_filename=host_config.get("identityfile"),  # list of keys, ~ is already expanded
            allow_agent=True,
        )
        self.client = ssh.open_sftp()

    def _disconnect(self):
        self.client.close()
        self.client = None

    def create(self):
        if self.opened:
            raise BackendMustNotBeOpen()
        self._connect()
        try:
            # We accept an already existing empty directory and we also optionally create
            # any missing parent dirs. The latter is important for repository hosters that
            # only offer limited access to their storage (e.g., only via borg/borgstore).
            # It is also simpler than requiring users to create parent dirs separately.
            self._mkdir(self.base_path, exist_ok=True, parents=True)
            # Prevent users from creating a mess by using non-empty directories:
            contents = list(self.client.listdir(self.base_path))
            if contents:
                raise BackendAlreadyExists(f"sftp storage base path is not empty: {self.base_path}")
        except IOError as err:
            raise BackendError(f"sftp storage I/O error: {err}")
        finally:
            self._disconnect()

    def destroy(self):
        def delete_recursive(path):
            parent = Path(path)
            for child_st in self.client.listdir_attr(str(parent)):
                child = parent / child_st.filename
                if stat.S_ISDIR(child_st.st_mode):
                    delete_recursive(child)
                else:
                    self.client.unlink(str(child))
            try:
                self.client.rmdir(str(parent))
            except OSError as e:
                # usually, this is because of missing permissions.
                if path != self.base_path:
                    raise e from None
                # do not raise if we can't remove the base path directory.
                # .create accepts an already existing base path, thus
                # .destroy may leave an existing base path behind.

        if self.opened:
            raise BackendMustNotBeOpen()
        self._connect()
        try:
            try:
                st = self.client.stat(self.base_path)  # check if this storage exists, fail early if not.
            except FileNotFoundError:
                raise BackendDoesNotExist(f"sftp storage base path does not exist: {self.base_path}") from None
            delete_recursive(self.base_path)
        finally:
            self._disconnect()

    def open(self):
        if self.opened:
            raise BackendMustNotBeOpen()
        self._connect()
        try:
            st = self.client.stat(self.base_path)  # check if this storage exists, fail early if not.
        except FileNotFoundError:
            raise BackendDoesNotExist(f"sftp storage base path does not exist: {self.base_path}") from None
        if not stat.S_ISDIR(st.st_mode):
            raise BackendDoesNotExist(f"sftp storage base path is not a directory: {self.base_path}")
        self.client.chdir(self.base_path)  # this sets the cwd we work in!
        self.opened = True

    def close(self):
        if not self.opened:
            raise BackendMustBeOpen()
        self._disconnect()
        self.opened = False

    def _mkdir(self, name, *, parents=False, exist_ok=False):
        # Path.mkdir, but via sftp
        p = Path(name)
        try:
            self.client.mkdir(str(p))
        except FileNotFoundError:
            # the parent dir is missing
            if not parents:
                raise
            # first create parent dir(s), recursively:
            self._mkdir(p.parents[0], parents=parents, exist_ok=exist_ok)
            # then retry:
            self.client.mkdir(str(p))
        except OSError:
            # maybe p already existed?
            if not exist_ok:
                raise

    def mkdir(self, name):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        self._mkdir(name, parents=True, exist_ok=True)

    def rmdir(self, name):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        try:
            self.client.rmdir(name)
        except FileNotFoundError:
            raise ObjectNotFound(name) from None

    def info(self, name):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        try:
            st = self.client.stat(name)
        except FileNotFoundError:
            return ItemInfo(name=name, exists=False, directory=False, size=0)
        else:
            is_dir = stat.S_ISDIR(st.st_mode)
            return ItemInfo(name=name, exists=True, directory=is_dir, size=st.st_size)

    def load(self, name, *, size=None, offset=0):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        try:
            with self.client.open(name) as f:
                f.seek(offset)
                f.prefetch(size)  # speeds up the following read() significantly!
                return f.read(size)
        except FileNotFoundError:
            raise ObjectNotFound(name) from None

    def store(self, name, value):
        def _write_to_tmpfile():
            with self.client.open(tmp_name, mode="w") as f:
                f.set_pipelined(True)  # speeds up the following write() significantly!
                f.write(value)

        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        tmp_dir = Path(name).parent
        # write to a differently named temp file in same directory first,
        # so the store never sees partially written data.
        tmp_name = str(tmp_dir / ("".join(random.choices("abcdefghijklmnopqrstuvwxyz", k=8)) + TMP_SUFFIX))
        try:
            # try to do it quickly, not doing the mkdir. each sftp op might be slow due to latency.
            # this will frequently succeed, because the dir is already there.
            _write_to_tmpfile()
        except FileNotFoundError:
            # retry, create potentially missing dirs first. this covers these cases:
            # - either the dirs were not precreated
            # - a previously existing directory was "lost" in the filesystem
            self._mkdir(str(tmp_dir), parents=True, exist_ok=True)
            _write_to_tmpfile()
        # rename it to the final name:
        try:
            self.client.posix_rename(tmp_name, name)
        except OSError:
            self.client.unlink(tmp_name)
            raise

    def delete(self, name):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        try:
            self.client.unlink(name)
        except FileNotFoundError:
            raise ObjectNotFound(name) from None

    def move(self, curr_name, new_name):
        def _rename_to_new_name():
            self.client.posix_rename(curr_name, new_name)

        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(curr_name)
        validate_name(new_name)
        parent_dir = Path(new_name).parent
        try:
            # try to do it quickly, not doing the mkdir. each sftp op might be slow due to latency.
            # this will frequently succeed, because the dir is already there.
            _rename_to_new_name()
        except FileNotFoundError:
            # retry, create potentially missing dirs first. this covers these cases:
            # - either the dirs were not precreated
            # - a previously existing directory was "lost" in the filesystem
            self._mkdir(str(parent_dir), parents=True, exist_ok=True)
            try:
                _rename_to_new_name()
            except FileNotFoundError:
                raise ObjectNotFound(curr_name) from None

    def list(self, name):
        if not self.opened:
            raise BackendMustBeOpen()
        validate_name(name)
        try:
            infos = self.client.listdir_attr(name)
        except FileNotFoundError:
            raise ObjectNotFound(name) from None
        else:
            for info in sorted(infos, key=lambda i: i.filename):
                try:
                    validate_name(info.filename)
                except ValueError:
                    pass  # that file is likely not from us or is still uploading
                else:
                    is_dir = stat.S_ISDIR(info.st_mode)
                    yield ItemInfo(name=info.filename, exists=True, size=info.st_size, directory=is_dir)


# === borgstore-0.4.0/src/borgstore/constants.py ===

"""Constants used by BorgStore."""

# Namespace to pass to list() for the storage root:
ROOTNS = ""

# Filename suffixes used for special purposes
TMP_SUFFIX = ".tmp"  # Temporary file while being uploaded/written
DEL_SUFFIX = ".del"  # "Soft-deleted" item; can be undeleted

# Maximum name length (not precise; suffixes might be added!)
MAX_NAME_LENGTH = 100  # Being rather conservative here to improve portability between backends and platforms


# === borgstore-0.4.0/src/borgstore/server/__init__.py ===

"""
BorgStore HTTP REST server.
""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773549086.0 borgstore-0.4.0/src/borgstore/server/rest.py0000644000076500000240000003146615155433036017547 0ustar00twstaffimport argparse import json import base64 import logging from http import HTTPStatus as HTTP from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler from urllib.parse import urlsplit, parse_qs from ..backends.errors import ( ObjectNotFound, BackendAlreadyExists, BackendDoesNotExist, PermissionDenied, BackendError, BackendMustBeOpen, BackendMustNotBeOpen, ) from ..store import get_backend logger = logging.getLogger(__name__) class BorgStoreRESTRequestHandler(BaseHTTPRequestHandler): protocol_version = "HTTP/1.1" def _log(self, format, args, level=logging.INFO): addr = self.address_string() dt = self.log_date_time_string() user = self.server.username or "-" request_details = format % args logger.log(level, "%s %s %s [%s] %s" % (addr, "-", user, dt, request_details)) def log_message(self, format, *args): self._log(format, args, logging.INFO) def log_error(self, format, *args): # usually this is pretty useless and redundant, thus we only log it at debug level. 
self._log(format, args, logging.DEBUG) @staticmethod def checks_and_logging(func): def wrapper(self): if not self._check_accept(): return if not self._check_auth(): return self._send_unauthorized() return func(self) return wrapper def _check_auth(self): if not self.server.username or not self.server.password: return True auth_header = self.headers.get("Authorization") if not auth_header: return False scheme, _, encoded_credentials = auth_header.partition(" ") if scheme.lower() != "basic": return False try: decoded_credentials = base64.b64decode(encoded_credentials).decode("utf-8") username, _, password = decoded_credentials.partition(":") authorized = username == self.server.username and password == self.server.password return authorized except Exception: logger.exception("Authentication code crashed, returning: unauthorized.") return False def respond(self, status=HTTP.OK, data=None, content_type=None, headers=None): self.send_response(status) if content_type: self.send_header("Content-Type", content_type) if headers: for key, value in headers.items(): self.send_header(key, value) if data is not None: self.send_header("Content-Length", str(len(data))) elif not headers or "Content-Length" not in headers: self.send_header("Content-Length", "0") self.end_headers() if data is not None and self.command != "HEAD": self.wfile.write(data) def _send_unauthorized(self): self.respond( HTTP.UNAUTHORIZED, data=b"Unauthorized", headers={"WWW-Authenticate": 'Basic realm="BorgStore REST Server"'} ) def _check_accept(self): accept = self.headers.get("Accept") if accept != "application/vnd.x.borgstore.rest.v1": msg = "Not Acceptable: unsupported or missing Accept header" self.send_error(HTTP.NOT_ACCEPTABLE, msg) return False return True @property def split_url(self): return urlsplit(self.path) @property def query(self): return parse_qs(self.split_url.query) @property def name(self): return self.split_url.path.strip("/") def _handle_exception(self, e, name=None): if isinstance(e, 
ObjectNotFound): self.send_error(HTTP.NOT_FOUND, str(e)) elif isinstance(e, BackendDoesNotExist): self.send_error(HTTP.GONE, str(e)) elif isinstance(e, BackendAlreadyExists): self.send_error(HTTP.CONFLICT, str(e)) elif isinstance(e, (BackendMustBeOpen, BackendMustNotBeOpen)): self.send_error(HTTP.PRECONDITION_FAILED, str(e)) elif isinstance(e, PermissionDenied): self.send_error(HTTP.FORBIDDEN, str(e)) elif isinstance(e, (ValueError, TypeError)): self.send_error(HTTP.BAD_REQUEST, str(e)) logger.exception("Exception for %s", name or self.path) elif isinstance(e, BackendError): self.send_error(HTTP.INTERNAL_SERVER_ERROR, str(e)) logger.exception("Exception for %s", name or self.path) else: self.send_error(HTTP.INTERNAL_SERVER_ERROR, "Internal Server Error") logger.exception("Exception for %s", name or self.path) @checks_and_logging def do_POST(self): cmd = self.query.get("cmd", [None])[0] if cmd == "create": try: self.server.backend.create() self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, "create") return if cmd == "move": current = self.query.get("current", [None])[0] new = self.query.get("new", [None])[0] if current and new: try: with self.server.backend: self.server.backend.move(current, new) self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, f"move {current} -> {new}") else: self.send_error(HTTP.BAD_REQUEST, "Missing current or new name for move") return if cmd == "mkdir": try: with self.server.backend: self.server.backend.mkdir(self.name) self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, f"mkdir {self.name}") return if self.name: try: content_length = int(self.headers.get("Content-Length", 0)) data = self.rfile.read(content_length) with self.server.backend: self.server.backend.store(self.name, data) self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, self.name) return self.send_error(HTTP.BAD_REQUEST, "Bad Request") @checks_and_logging def do_DELETE(self): cmd = 
self.query.get("cmd", [None])[0] if cmd == "rmdir": try: with self.server.backend: self.server.backend.rmdir(self.name) self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, f"rmdir {self.name}") return if cmd == "destroy": try: self.server.backend.destroy() self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, "destroy") return if not self.name: self.send_error(HTTP.BAD_REQUEST, "Bad Request") return try: with self.server.backend: self.server.backend.delete(self.name) self.respond(HTTP.OK) except Exception as e: self._handle_exception(e, self.name) @checks_and_logging def do_HEAD(self): if not self.name: self.send_error(HTTP.BAD_REQUEST, "Bad Request") return try: with self.server.backend: info = self.server.backend.info(self.name) if not info.exists: raise ObjectNotFound(self.name) self.respond( HTTP.OK, headers={ "Content-Length": str(info.size), "X-BorgStore-Is-Directory": "true" if info.directory else "false", }, ) except Exception as e: self._handle_exception(e, self.name) @checks_and_logging def do_GET(self): # List directory if self.split_url.path.endswith("/"): try: # send a JSON list of objects # [{"name": "...", "size": ...}, ...] 
with self.server.backend: items = list(self.server.backend.list(self.name)) json_data = json.dumps( [{"name": item.name, "size": item.size, "directory": item.directory} for item in items], indent=2 ) response_data = json_data.encode("utf-8") self.respond(HTTP.OK, data=response_data, content_type="application/json") except Exception as e: self._handle_exception(e, self.name) return # Load object if not self.name: self.send_error(HTTP.BAD_REQUEST, "Bad Request") return try: range_header = self.headers.get("Range") offset = 0 size = None if range_header and range_header.startswith("bytes="): # Simple Range: bytes=OFFSET- or bytes=OFFSET-END try: range_val = range_header.split("=")[1] start_str, end_str = range_val.split("-") offset = int(start_str) if end_str: size = int(end_str) - offset + 1 except ValueError: pass with self.server.backend: data = self.server.backend.load(self.name, offset=offset, size=size) self.respond( HTTP.PARTIAL_CONTENT if range_header else HTTP.OK, data=data, content_type="application/octet-stream" ) except Exception as e: self._handle_exception(e, self.name) class BorgStoreRESTServer(ThreadingHTTPServer): disable_nagle_algorithm = True # aka TCP_NODELAY, reduces latency def __init__(self, server_address, backend, username=None, password=None): self.backend = backend self.username = username self.password = password super().__init__(server_address, BorgStoreRESTRequestHandler) PERMISSION_SHORTCUTS = { # these are for borgbackup, see borg.repository.Repository.__init__ "borgbackup-all": None, # permissions system will not be used "borgbackup-no-delete": { # mostly no delete, no overwrite "": "lr", "archives": "lrw", "cache": "lrwWD", # WD for chunks., last-key-checked, ... "config": "lrW", # W for manifest "data": "lrw", "keys": "lr", "locks": "lrwD", # borg needs to create/delete a shared lock here }, "borgbackup-write-only": { # mostly no reading "": "l", "archives": "lw", "cache": "lrwWD", # read allowed, e.g. for chunks. 
cache "config": "lrW", # W for manifest "data": "lw", # no r! "keys": "lr", "locks": "lrwD", # borg needs to create/delete a shared lock here }, "borgbackup-read-only": {"": "lr", "locks": "lrwD"}, # mostly r/o } def resolve_permissions(permissions): """Resolve a permissions shortcut name or JSON string to a permissions dict (or None).""" if permissions is None: return None if permissions in PERMISSION_SHORTCUTS: return PERMISSION_SHORTCUTS[permissions] # Try to parse as JSON try: return json.loads(permissions) except json.JSONDecodeError: valid = ", ".join(PERMISSION_SHORTCUTS) raise ValueError(f"Invalid --permissions value: {permissions!r}. Use a shortcut ({valid}) or a JSON object.") def serve(host, port, backend_url, username=None, password=None, permissions=None): backend = get_backend(backend_url, permissions=permissions) if backend is None: raise ValueError(f"Invalid backend URL: {backend_url}") server = BorgStoreRESTServer((host, port), backend, username, password) logger.info(f"BorgStore REST server listening on {host}:{port}") try: server.serve_forever() except KeyboardInterrupt: pass finally: server.server_close() if __name__ == "__main__": logging.basicConfig(level=logging.INFO, format="%(message)s") logger.setLevel(logging.INFO) parser = argparse.ArgumentParser(description="BorgStore REST Server") parser.add_argument("--host", default="127.0.0.1", help="Address/hostname to listen on") parser.add_argument("--port", type=int, default=5618, help="Port to listen on (default: 5618)") parser.add_argument("--backend", required=True, help="Backend URL (e.g. 
file:///tmp/store)") parser.add_argument("--username", help="Basic Auth username") parser.add_argument("--password", help="Basic Auth password") parser.add_argument("--permissions", help="Permissions: a shortcut name or a JSON object string.") args = parser.parse_args() permissions = resolve_permissions(args.permissions) serve(args.host, args.port, args.backend, args.username, args.password, permissions) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773549086.0 borgstore-0.4.0/src/borgstore/store.py0000644000076500000240000003322215155433036016410 0ustar00twstaff""" Key/value store implementation. The Store uses a backend to store key/value data and adds some functionality: - backend creation from a URL - configurable nesting - recursive list method - soft deletion """ from binascii import hexlify from collections import Counter from contextlib import contextmanager import logging import os import time from typing import Iterator, Optional from .utils.nesting import nest from .backends._base import ItemInfo, BackendBase from .backends.errors import ObjectNotFound, NoBackendGiven, BackendURLInvalid # noqa from .backends.posixfs import get_file_backend from .backends.rclone import get_rclone_backend from .backends.sftp import get_sftp_backend from .backends.s3 import get_s3_backend from .backends.rest import get_rest_backend from .constants import DEL_SUFFIX logger = logging.getLogger(__name__) def get_backend(url, permissions=None): """Parse backend URL and return a backend instance (or None).""" backend = get_file_backend(url, permissions=permissions) if backend is not None: return backend if permissions is not None: raise ValueError("Permissions are only supported for the 'file:' backend.") backend = get_sftp_backend(url) if backend is not None: return backend backend = get_rclone_backend(url) if backend is not None: return backend backend = get_s3_backend(url) if backend is not None: return backend backend = 
get_rest_backend(url) if backend is not None: return backend class Store: def __init__( self, url: Optional[str] = None, backend: Optional[BackendBase] = None, levels: Optional[dict] = None, permissions: Optional[dict] = None, ): self.url = url if backend is None and url is not None: backend = get_backend(url, permissions=permissions) if backend is None: raise BackendURLInvalid(f"Invalid or unsupported Backend Storage URL: {url}") if backend is None: raise NoBackendGiven("You need to give a backend instance or a backend url.") self.backend = backend self.set_levels(levels) self._stats: Counter = Counter() # this is to emulate additional latency to what the backend actually offers: self.latency = float(os.environ.get("BORGSTORE_LATENCY", "0")) / 1e6 # [us] -> [s] # this is to emulate less bandwidth than what the backend actually offers: self.bandwidth = float(os.environ.get("BORGSTORE_BANDWIDTH", "0")) / 8 # [bits/s] -> [bytes/s] def __repr__(self): return f"<Store(url={self.url!r})>" def set_levels(self, levels: dict, create: bool = False) -> None: if not levels or not isinstance(levels, dict): raise ValueError("No or invalid levels configuration given.") # we accept levels as a dict, but we rather want a list of (namespace, levels) tuples, longest namespace first: self.levels = [entry for entry in sorted(levels.items(), key=lambda item: len(item[0]), reverse=True)] if create: self.create_levels() def create_levels(self): """create any needed namespaces / directories in advance""" # doing that saves a lot of ad-hoc mkdir calls, which is especially important # for backends with high latency or other noticeable costs of mkdir. with self: for namespace, levels in self.levels: namespace = namespace.rstrip("/") level = max(levels) if level == 0: # flat, we just need to create the namespace directory: self.backend.mkdir(namespace) elif level > 0: # nested, we only need to create the deepest nesting dir layer, # any missing parent dirs will be created as needed by backend.mkdir.
limit = 2 ** (level * 8) for i in range(limit): dir = hexlify(i.to_bytes(length=level, byteorder="big")).decode("ascii") name = f"{namespace}/{dir}" if namespace else dir nested_name = nest(name, level) self.backend.mkdir(nested_name[: -2 * level - 1]) else: raise ValueError(f"Invalid levels: {namespace}: {levels}") def create(self) -> None: self.backend.create() if self.backend.precreate_dirs: self.create_levels() def destroy(self) -> None: self.backend.destroy() def __enter__(self): self.open() return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() return False def open(self) -> None: self.backend.open() def close(self) -> None: self.backend.close() @contextmanager def _stats_updater(self, key, msg): """update call counters and overall times, also emulate latency and bandwidth""" # do not use this in generators! volume_before = self._stats_get_volume(key) start = time.perf_counter_ns() yield be_needed_ns = time.perf_counter_ns() - start volume_after = self._stats_get_volume(key) volume = volume_after - volume_before emulated_time = self.latency + (0 if not self.bandwidth else float(volume) / self.bandwidth) remaining_time = emulated_time - be_needed_ns / 1e9 if remaining_time > 0.0: time.sleep(remaining_time) end = time.perf_counter_ns() overall_time = end - start self._stats[f"{key}_calls"] += 1 self._stats[f"{key}_time"] += overall_time logger.debug(f"borgstore: {msg} -> {volume}B in {overall_time / 1e6:0.1f}ms") def _stats_update_volume(self, key, amount): self._stats[f"{key}_volume"] += amount def _stats_get_volume(self, key): return self._stats.get(f"{key}_volume", 0) @property def stats(self): """ Return statistics such as method call counters, overall time [s], overall data volume, and overall throughput. Please note that the stats values only consider what is seen on the Store API: - There might be additional time spent by the caller, outside of Store, thus: - Real time is longer. - Real throughput is lower. 
- There are some overheads not accounted for, e.g., the volume only adds up the data size of load and store. - Write buffering or cached reads might give a wrong impression. """ st = dict(self._stats) # copy Counter -> generic dict for key in "info", "load", "store", "delete", "move", "list": # make sure key is present, even if method was not called st[f"{key}_calls"] = st.get(f"{key}_calls", 0) # convert integer ns timings to float s st[f"{key}_time"] = st.get(f"{key}_time", 0) / 1e9 for key in "load", "store": v = st.get(f"{key}_volume", 0) t = st.get(f"{key}_time", 0) st[f"{key}_throughput"] = v / t if t else 0.0 # avoid ZeroDivisionError if the method was never called return st def _get_levels(self, name): """Get levels from the configuration depending on the namespace.""" for namespace, levels in self.levels: if name.startswith(namespace): return levels # Store.create_levels requires all namespaces to be configured in self.levels. raise KeyError(f"no matching namespace found for: {name}") def find(self, name: str, *, deleted=False) -> str: """ Find an item checking all supported nesting levels and return its nested name: - item not in the store yet: we won't find it, but find will return a nested name for **last** level. - item is in the store already: find will return the same nested name as the already present item. If deleted is True, find will try to find a "deleted" item.
""" nested_name = None suffix = DEL_SUFFIX if deleted else None for level in self._get_levels(name): nested_name = nest(name, level, add_suffix=suffix) info = self.backend.info(nested_name) if info.exists: break return nested_name def info(self, name: str, *, deleted=False) -> ItemInfo: with self._stats_updater("info", f"info({name!r}, deleted={deleted})"): return self.backend.info(self.find(name, deleted=deleted)) def load(self, name: str, *, size=None, offset=0, deleted=False) -> bytes: with self._stats_updater("load", f"load({name!r}, offset={offset}, size={size}, deleted={deleted})"): result = self.backend.load(self.find(name, deleted=deleted), size=size, offset=offset) self._stats_update_volume("load", len(result)) return result def store(self, name: str, value: bytes) -> None: # note: using .find here will: # - overwrite an existing item (level stays same) # - write to the last level if no existing item is found. with self._stats_updater("store", f"store({name!r})"): self.backend.store(self.find(name), value) self._stats_update_volume("store", len(value)) def delete(self, name: str, *, deleted=False) -> None: """ Really and immediately deletes an item. See also .move(name, delete=True) for "soft" deletion. 
""" with self._stats_updater("delete", f"delete({name!r}, deleted={deleted})"): self.backend.delete(self.find(name, deleted=deleted)) def move( self, name: str, new_name: Optional[str] = None, *, delete: bool = False, undelete: bool = False, change_level: bool = False, deleted: bool = False, ) -> None: if delete: # use case: keep name, but soft "delete" the item nested_name = self.find(name, deleted=False) nested_new_name = nested_name + DEL_SUFFIX msg = f"soft_delete({name!r}, deleted={deleted})" elif undelete: # use case: keep name, undelete a previously soft "deleted" item nested_name = self.find(name, deleted=True) nested_new_name = nested_name.removesuffix(DEL_SUFFIX) msg = f"soft_undelete({name!r}, deleted={deleted})" elif change_level: # use case: keep name, changing to another nesting level suffix = DEL_SUFFIX if deleted else None nested_name = self.find(name, deleted=deleted) nested_new_name = nest(name, self._get_levels(name)[-1], add_suffix=suffix) msg = f"change_level({name!r}, deleted={deleted})" else: # generic use (be careful!) if not new_name: raise ValueError("Generic move requires new_name to be given.") nested_name = self.find(name, deleted=deleted) nested_new_name = self.find(new_name, deleted=deleted) msg = f"rename({name!r}, {new_name!r}, deleted={deleted})" with self._stats_updater("move", msg + f" [{nested_name!r}, {nested_new_name!r}]"): self.backend.move(nested_name, nested_new_name) def list(self, name: str, deleted: bool = False) -> Iterator[ItemInfo]: """ List all names in the namespace . If deleted is False (default), only non-deleted items are yielded. If deleted is True, only soft-deleted items are yielded. backend.list giving us sorted names implies Store.list is also sorted, if all items are stored on the same level. 
""" # we need this wrapper due to the recursion - we only want to increment list_calls once: logger.debug(f"borgstore: list_start({name!r}, deleted={deleted})") self._stats["list_calls"] += 1 count = 0 try: for info in self._list(name, deleted=deleted): count += 1 yield info finally: # note: as this is a generator, we do not measure the execution time because # that would include the time needed by the caller to process the infos. logger.debug(f"borgstore: list_end({name!r}, deleted={deleted}) -> {count}") def _list(self, name: str, deleted: bool = False) -> Iterator[ItemInfo]: # as the backend.list method only supports non-recursive listing and # also returns directories/namespaces we introduced for nesting, we do the # recursion here (and also we do not yield directory names from here). start = time.perf_counter_ns() backend_list_iterator = self.backend.list(name) if self.latency: # we add the simulated latency once per backend.list iteration, not per element. time.sleep(self.latency) end = time.perf_counter_ns() self._stats["list_time"] += end - start while True: start = time.perf_counter_ns() try: info = next(backend_list_iterator) except StopIteration: break finally: end = time.perf_counter_ns() self._stats["list_time"] += end - start if info.directory: # note: we only expect subdirectories from key nesting, but not namespaces nested into each other. 
subdir_name = (name + "/" + info.name) if name else info.name yield from self._list(subdir_name, deleted=deleted) else: is_deleted = info.name.endswith(DEL_SUFFIX) if deleted and is_deleted: yield info._replace(name=info.name.removesuffix(DEL_SUFFIX)) elif not deleted and not is_deleted: yield info ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1773575263.3644855 borgstore-0.4.0/src/borgstore/utils/0000755000076500000240000000000015155516137016044 5ustar00twstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1757407943.0 borgstore-0.4.0/src/borgstore/utils/__init__.py0000644000076500000240000000004515057765307020162 0ustar00twstaff"""Utility helpers for BorgStore.""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773543560.0 borgstore-0.4.0/src/borgstore/utils/nesting.py0000644000076500000240000000521615155420210020053 0ustar00twstaff""" Nest/un-nest names to address directory scalability issues and handle the suffix for deleted items. Many filesystem directory implementations do not cope well with extremely large numbers of entries, so we introduce intermediate directories to reduce the number of entries per directory. The name is expected to have the key as the last element, for example: name = "namespace/0123456789abcdef" # often, the key is hex(hash(content)) As we can have a huge number of keys, we could nest 2 levels deep: nested_name = nest(name, 2) nested_name == "namespace/01/23/0123456789abcdef" Note that the final element is the full key — this is better to deal with in case of errors (for example, a filesystem issue and items being pushed to lost+found) and also easier to handle (e.g., a directory listing directly yields keys without needing to reassemble the full key from parent directories and partial keys). Also, a sorted directory listing has the same order as a sorted key list. 
name = unnest(nested_name, namespace="namespace") # a namespace with a final slash is also supported name == "namespace/0123456789abcdef" Notes: - It works the same way without a namespace, but we recommend always using a namespace. - Always use nest/unnest, even if levels == 0 are desired, as they also perform some checks and handle adding/removing a suffix. """ from typing import Optional def split_key(name: str) -> tuple[Optional[str], str]: namespace_key = name.rsplit("/", 1) if len(namespace_key) == 2: namespace, key = namespace_key else: # == 1 (no slash in name) namespace, key = None, name return namespace, key def nest(name: str, levels: int, *, add_suffix: Optional[str] = None) -> str: """namespace/12345678 --2 levels--> namespace/12/34/12345678""" if levels > 0: namespace, key = split_key(name) parts = [key[2 * level : 2 * level + 2] for level in range(levels)] parts.append(key) if namespace is not None: parts.insert(0, namespace) name = "/".join(parts) return (name + add_suffix) if add_suffix else name def unnest(name: str, namespace: str, *, remove_suffix: Optional[str] = None) -> str: """namespace/12/34/12345678 --namespace=namespace--> namespace/12345678""" if namespace: if not namespace.endswith("/"): namespace += "/" if not name.startswith(namespace): raise ValueError(f"name {name} does not start with namespace {namespace}") name = name.removeprefix(namespace) key = name.rsplit("/", 1)[-1] if remove_suffix: key = key.removesuffix(remove_suffix) return namespace + key ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1773575263.3647501 borgstore-0.4.0/src/borgstore.egg-info/0000755000076500000240000000000015155516137016376 5ustar00twstaff././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 borgstore-0.4.0/src/borgstore.egg-info/PKG-INFO0000644000076500000240000003422415155516137017500 0ustar00twstaffMetadata-Version: 2.4 Name: borgstore Version: 0.4.0 Summary: key/value store 
Author-email: Thomas Waldmann License-Expression: BSD-3-Clause Project-URL: Homepage, https://github.com/borgbackup/borgstore Keywords: kv,key/value,store Classifier: Development Status :: 3 - Alpha Classifier: Intended Audience :: Developers Classifier: Operating System :: POSIX Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: 3.14 Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: Software Development :: Libraries :: Python Modules Requires-Python: >=3.10 Description-Content-Type: text/x-rst License-File: LICENSE.rst Provides-Extra: rest Requires-Dist: requests>=2.25.1; extra == "rest" Provides-Extra: rclone Requires-Dist: requests>=2.25.1; extra == "rclone" Provides-Extra: sftp Requires-Dist: paramiko>=1.9.1; extra == "sftp" Provides-Extra: s3 Requires-Dist: boto3; extra == "s3" Provides-Extra: none Dynamic: license-file BorgStore ========= A key/value store implementation in Python, supporting multiple backends. Keys ---- A key (str) can look like: - 0123456789abcdef... (usually a long, hex-encoded hash value) - Any other pure ASCII string without '/', '..', or spaces. Namespaces ---------- To keep things separate, keys should be prefixed with a namespace, such as: - config/settings - meta/0123456789abcdef... - data/0123456789abcdef... Please note: 1. You should always use namespaces. 2. Nested namespaces like namespace1/namespace2/key are not supported. 3. The code can work without a namespace (empty namespace ""), but then you can't add another namespace later, because that would create nested namespaces. Values ------ Values can be any arbitrary binary data (bytes). 
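The key rules above can be sketched as a tiny validator. This is illustrative only — ``is_valid_key`` is a hypothetical helper, not part of the borgstore API:

```python
def is_valid_key(key: str) -> bool:
    """Check a key against the stated rules: pure ASCII, no '/', no '..', no spaces."""
    return key.isascii() and "/" not in key and ".." not in key and " " not in key

# A hex-encoded hash is a typical key; '/' is reserved for separating the namespace.
assert is_valid_key("0123456789abcdef")
assert not is_valid_key("a/b")         # '/' only separates namespace and key
assert not is_valid_key("..")          # path traversal is rejected
assert not is_valid_key("with space")  # spaces are not allowed
```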
Store Operations ---------------- The high-level Store API implementation transparently deals with nesting and soft deletion, so the caller doesn't need to care much about that, and the backend API can be much simpler: - create/destroy: initialize or remove the whole store. - list: flat list of the items in the given namespace (by default, only non-deleted items; optionally, only soft-deleted items). - store: write a new item into the store (providing its key/value pair). - load: read a value from the store (given its key); partial loads specifying an offset and/or size are supported. - info: get information about an item via its key (exists, size, ...). - delete: immediately remove an item from the store (given its key). - move: implements renaming, soft delete/undelete, and moving to the current nesting level. - stats: API call counters, time spent in API methods, data volume/throughput. - latency/bandwidth emulator: can emulate higher latency (via BORGSTORE_LATENCY [us]) and lower bandwidth (via BORGSTORE_BANDWIDTH [bit/s]) than what is actually provided by the backend. Store operations (and per-op timing and volume) are logged at DEBUG log level. Automatic Nesting ----------------- For the Store user, items have names such as: - namespace/0123456789abcdef... - namespace/abcdef0123456789... If there are very many items in the namespace, this could lead to scalability issues in the backend. The Store implementation therefore offers transparent nesting, so that internally the backend API is called with names such as: - namespace/01/23/45/0123456789abcdef... - namespace/ab/cd/ef/abcdef0123456789... The nesting depth can be configured from 0 (= no nesting) to N levels and there can be different nesting configurations depending on the namespace. The Store supports operating at different nesting levels in the same namespace at the same time.
When using nesting depth > 0, the backends assume that keys are hashes (contain hex digits) because some backends pre-create the nesting directories at initialization time to optimize backend performance. Soft deletion ------------- To soft-delete an item (so its value can still be read or it can be undeleted), the store just renames the item, appending ".del" to its name. Undelete reverses this by removing the ".del" suffix from the name. Some store operations provide a boolean flag "deleted" to control whether they consider soft-deleted items. Backends -------- The backend API is rather simple; one only needs to provide some very basic operations. Existing backends are listed below; more might come in the future. posixfs ~~~~~~~ Use storage on a local POSIX filesystem: - URL: ``file:///absolute/path`` - It is the caller's responsibility to convert a relative path into an absolute filesystem path. - Namespaces: directories - Values: in key-named files - Permissions: This backend can enforce a simple, test-friendly permission system and raises ``PermissionDenied`` if access is not permitted by the configuration. You provide a mapping of names (paths) to granted permission letters. Permissions apply to the exact name and all of its descendants (inheritance). If a name is not present in the mapping, its nearest ancestor is consulted, up to the empty name "" (the store root). If no mapping is provided at all, all operations are allowed. 
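The nearest-ancestor lookup described above can be sketched in a few lines. This is illustrative only — ``effective_permissions`` is a hypothetical helper, not the backend's actual code, and it returns "" (nothing granted) when a mapping is given but matches nothing, whereas omitting the mapping entirely allows everything:

```python
def effective_permissions(permissions: dict[str, str], name: str) -> str:
    """Walk from `name` up towards the root "" and return the nearest configured entry."""
    while True:
        if name in permissions:
            return permissions[name]
        if not name:
            return ""  # reached the root without a match: nothing granted
        name = name.rpartition("/")[0]  # drop the last path component

rules = {"": "l", "dir": "lrw", "dir/file": "r"}
print(effective_permissions(rules, "dir/other"))  # inherits "lrw" from "dir"
print(effective_permissions(rules, "dir/file"))   # exact match: "r"
```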
Permission letters: - ``l``: allow listing object names (directory/namespace listing) - ``r``: allow reading objects (contents) - ``w``: allow writing new objects (must not already exist) - ``W``: allow writing objects including overwriting existing objects - ``D``: allow deleting objects Operation requirements: - create(): requires ``w`` or ``W`` on the store root (``wW``) - destroy(): requires ``D`` on the store root - mkdir(name): requires ``w`` - rmdir(name): requires ``w`` or ``D`` (``wD``) - list(name): requires ``l`` - info(name): requires ``l`` (``r`` also accepted) - load(name): requires ``r`` - store(name, value): requires ``w`` for new objects, ``W`` for overwrites (``wW``) - delete(name): requires ``D`` - move(src, dst): requires ``D`` for the source and ``w``/``W`` for the destination Examples: - Read-only store (recursively): ``permissions = {"": "lr"}`` - No-delete, no-overwrite (but allow adding new items): ``permissions = {"": "lrw"}`` - Hierarchical rules: only allow listing at root, allow read/write in "dir", but only read for "dir/file": :: permissions = { "": "l", "dir": "lrw", "dir/file": "r", } To use permissions with ``Store`` and ``posixfs``, pass the mapping to Store and it will be handed to the posixfs backend: :: from borgstore import Store store = Store(url="file:///abs/path", permissions={"": "lrwWD"}) store.create() store.open() # ... store.close() sftp ~~~~ Use storage on an SFTP server: - URL: ``sftp://user@server:port/relative/path`` (strongly recommended) For users' and admins' convenience, the mapping of the URL path to the server filesystem path depends on the server configuration (home directory, sshd/sftpd config, ...). Usually the path is relative to the user's home directory. - URL: ``sftp://user@server:port//absolute/path`` As this uses an absolute path, some things become more difficult: - A user's configuration might break if a server admin moves a user's home to a new location. 
- Users must know the full absolute path of the space they are permitted to use. - Namespaces: directories - Values: in key-named files rclone ~~~~~~ Use storage on any of the many cloud providers `rclone <https://rclone.org/>`_ supports: - URL: ``rclone:remote:path`` — we just prefix "rclone:" and pass everything to the right of that to rclone; see: https://rclone.org/docs/#syntax-of-remote-paths - The implementation primarily depends on the specific remote. - The rclone binary path can be set via the environment variable ``RCLONE_BINARY`` (default: "rclone"). s3 ~~ Use storage on an S3-compliant cloud service: - URL: ``(s3|b2):[profile|(access_key_id:access_key_secret)@][scheme://hostname[:port]]/bucket/path`` The underlying backend is based on ``boto3``, so all standard boto3 authentication methods are supported: - provide a named profile (from your boto3 config), - include access key ID and secret in the URL, - or use default credentials (e.g., environment variables, IAM roles, etc.). See the boto3 credentials documentation for more details. If you're connecting to **AWS S3**, the ``[scheme://hostname[:port]]`` part is optional. Bucket and path are always required. .. note:: There is a known issue with some S3-compatible services (e.g., **Backblaze B2**). If you encounter problems, try using ``b2:`` instead of ``s3:`` in the URL. - Namespaces: directories - Values: in key-named files REST (http/https) ~~~~~~~~~~~~~~~~~ Use storage on a BorgStore REST server: - URL: ``http[s]://[user:password@]host:port/`` - Namespaces: depends on backend used by the server - Values: depends on backend used by the server - Authentication: Optional Basic Auth is supported. REST Server ----------- BorgStore includes a simple REST server that can be used to provide remote access to any BorgStore backend.
Running the server ~~~~~~~~~~~~~~~~~~ Run a server with a file: backend (for a local directory), using HTTP Basic Authentication:: python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///tmp/teststore Accessing the server from a client ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The borgstore REST client can then access the server via:: http://user:pass@127.0.0.1:5618/ Permissions ~~~~~~~~~~~ The REST server, when used with the ``posixfs`` backend, supports the same permissions system as that backend (see above). If ``--permissions`` is omitted, all operations are allowed. To restrict permissions, pass a JSON-encoded permissions mapping via ``--permissions``. Examples: Read-only access:: python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///tmp/teststore \ --permissions '{"": "lr"}' No-delete, no-overwrite (allow adding new items):: python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///tmp/teststore \ --permissions '{"": "lrw"}' Full access:: python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///tmp/teststore \ --permissions '{"": "lrwWD"}' BorgBackup shortcuts ^^^^^^^^^^^^^^^^^^^^ Instead of hand-crafting a JSON mapping, you can use a named shortcut tailored for `BorgBackup <https://www.borgbackup.org/>`_ repositories: ``borgbackup-all`` No permission restrictions — all operations are allowed (equivalent to omitting ``--permissions``). ``borgbackup-no-delete`` Prevent deletion and overwriting of existing objects; new objects may still be added. ``borgbackup-write-only`` Clients may store new data but cannot read existing data back (except for caches and metadata that borg needs internally). ``borgbackup-read-only`` Clients may only list and read objects. Example — restrict a backup server to no-delete access: ..
code-block:: bash python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///home/user/repos/repo1 \ --permissions borgbackup-no-delete Custom JSON permissions ^^^^^^^^^^^^^^^^^^^^^^^ You can also pass an arbitrary JSON-encoded permissions mapping directly. Hierarchical rules (list-only at root, read/write in ``data/``):: python3 -m borgstore.server.rest --host 127.0.0.1 --port 5618 \ --username user --password pass \ --backend file:///tmp/teststore \ --permissions '{"": "l", "data": "lrw"}' Scalability ----------- - Count of key/value pairs stored in a namespace: automatic nesting is provided for keys to address common scalability issues. - Key size: there are no special provisions for extremely long keys (e.g., exceeding backend limitations). Usually this is not a problem, though. - Value size: there are no special provisions for dealing with large value sizes (e.g., more than available memory, more than backend storage limitations, etc.). If one deals with very large values, one usually cuts them into chunks before storing them in the store. - Partial loads improve performance by avoiding a full load if only part of the value is needed (e.g., a header with metadata). Installation ------------ Install without the extras:: pip install borgstore pip install "borgstore[none]" # same thing (simplifies automation) Install with the ``rest:`` backend (more dependencies):: pip install "borgstore[rest]" Install with the ``sftp:`` backend (more dependencies):: pip install "borgstore[sftp]" Install with the ``s3:`` backend (more dependencies):: pip install "borgstore[s3]" Install with the ``rclone:`` backend (more dependencies):: pip install "borgstore[rclone]" Please note that ``rclone:`` also supports SFTP and S3 remotes. Want a demo? 
------------ Run this to get instructions on how to run the demo:: python3 -m borgstore State of this project --------------------- **API is still unstable and expected to change as development goes on.** **As long as the API is unstable, there will be no data migration tools, such as tools for upgrading an existing store's data to a new release.** There are tests, and they pass for the basic functionality, so some functionality is already working well. There might be missing features or optimization potential. Feedback is welcome! Many possible backends are still missing. If you want to create and support one, pull requests are welcome. Borg? ----- Please note that this code is currently **not** used by the stable release of BorgBackup (also known as "borg"), but only by Borg 2 beta 10+ and the master branch. License ------- BSD license. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 borgstore-0.4.0/src/borgstore.egg-info/SOURCES.txt0000644000076500000240000000135015155516137020261 0ustar00twstaffCHANGES.rst LICENSE.rst README.rst pyproject.toml src/borgstore/__init__.py src/borgstore/__main__.py src/borgstore/_version.py src/borgstore/constants.py src/borgstore/store.py src/borgstore.egg-info/PKG-INFO src/borgstore.egg-info/SOURCES.txt src/borgstore.egg-info/dependency_links.txt src/borgstore.egg-info/requires.txt src/borgstore.egg-info/top_level.txt src/borgstore/backends/__init__.py src/borgstore/backends/_base.py src/borgstore/backends/errors.py src/borgstore/backends/posixfs.py src/borgstore/backends/rclone.py src/borgstore/backends/rest.py src/borgstore/backends/s3.py src/borgstore/backends/sftp.py src/borgstore/server/__init__.py src/borgstore/server/rest.py src/borgstore/utils/__init__.py src/borgstore/utils/nesting.py././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 
borgstore-0.4.0/src/borgstore.egg-info/dependency_links.txt0000644000076500000240000000000115155516137022444 0ustar00twstaff ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 borgstore-0.4.0/src/borgstore.egg-info/requires.txt0000644000076500000240000000014015155516137020771 0ustar00twstaff [none] [rclone] requests>=2.25.1 [rest] requests>=2.25.1 [s3] boto3 [sftp] paramiko>=1.9.1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1773575263.0 borgstore-0.4.0/src/borgstore.egg-info/top_level.txt0000644000076500000240000000001215155516137021121 0ustar00twstaffborgstore